Google Cloud Certified - Associate Data Practitioner Exam

Question #6 (Topic: Exam A)
You work for a healthcare company that has a large on-premises data system containing patient records with personally identifiable information (PII) such as names, addresses, and medical diagnoses. You need a standardized managed solution that de-identifies PII across all your data feeds before ingestion into Google Cloud. What should you do?
A. Use Cloud Run functions to create a serverless data cleaning pipeline. Store the cleaned data in BigQuery.
B. Use Cloud Data Fusion to transform the data. Store the cleaned data in BigQuery.
C. Load the data into BigQuery, and inspect the data by using SQL queries. Use Dataflow to transform the data and remove any errors.
D. Use Apache Beam to read the data and perform the necessary cleaning and transformation operations. Store the cleaned data in BigQuery.
Answer: B
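Why B: Cloud Data Fusion is a fully managed, standardized data integration service, and its Wrangler transformations (for example, masking or hashing PII columns) can de-identify records from every feed before the pipeline loads them into BigQuery. As a rough illustration, a deployed de-identification pipeline can be started through the CDAP REST API; the endpoint and pipeline name below are placeholders:

```python
# Hypothetical trigger for an already-deployed Data Fusion pipeline named
# "deidentify-patient-records" that masks/hashes PII columns (for example,
# via Wrangler directives) before writing to BigQuery.
import google.auth
import google.auth.transport.requests
import requests

# Placeholder: copy the API endpoint from your Data Fusion instance details.
CDAP_ENDPOINT = "https://example-usw1.datafusion.googleusercontent.com/api"
PIPELINE = "deidentify-patient-records"  # hypothetical pipeline name

credentials, _ = google.auth.default()
credentials.refresh(google.auth.transport.requests.Request())

# Batch pipelines are deployed as the DataPipelineWorkflow program in CDAP.
url = (f"{CDAP_ENDPOINT}/v3/namespaces/default/apps/{PIPELINE}"
       "/workflows/DataPipelineWorkflow/start")
response = requests.post(
    url, headers={"Authorization": f"Bearer {credentials.token}"}
)
response.raise_for_status()
print("De-identification pipeline run started")
```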
Question #7 (Topic: Exam A)
You manage a large amount of data in Cloud Storage, including raw data, processed data, and backups. Your organization is subject to strict compliance regulations that mandate data immutability for specific data types. You want to use an efficient process to reduce storage costs while ensuring that your storage strategy meets retention requirements. What should you do?
A. Configure lifecycle management rules to transition objects to appropriate storage classes based on access patterns. Set up Object Versioning for all objects to meet immutability requirements.
B. Move objects to different storage classes based on their age and access patterns. Use Cloud Key Management Service (Cloud KMS) to encrypt specific objects with customer-managed encryption keys (CMEK) to meet immutability requirements.
C. Create a Cloud Run function to periodically check object metadata, and move objects to the appropriate storage class based on age and access patterns. Use object holds to enforce immutability for specific objects.
D. Use object holds to enforce immutability for specific objects, and configure lifecycle management rules to transition objects to appropriate storage classes based on age and access patterns.
Answer: D
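Why D: object holds are the native Cloud Storage mechanism for immutability, and lifecycle management rules handle cost-driven storage class transitions automatically, with no custom code. A minimal sketch with the google-cloud-storage Python client; the bucket name, object name, and age thresholds are placeholders:

```python
# Minimal sketch with the google-cloud-storage client; bucket name, object
# name, and age thresholds are placeholders.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("example-compliance-bucket")  # hypothetical bucket

# Lifecycle rules: tier objects to cheaper storage classes as they age.
bucket.add_lifecycle_set_storage_class_rule("NEARLINE", age=30)
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=90)
bucket.add_lifecycle_set_storage_class_rule("ARCHIVE", age=365)
bucket.patch()

# Object hold: make a specific regulated object immutable until released.
blob = bucket.blob("patient-records/2024-backup.avro")  # hypothetical object
blob.temporary_hold = True  # event_based_hold also works with retention policies
blob.patch()
```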
Question #8 (Topic: Exam A)
You work for an ecommerce company that has a BigQuery dataset that contains customer purchase history, demographics, and website interactions. You need to build a machine learning (ML) model to predict which customers are most likely to make a purchase in the next month. You have limited engineering resources and need to minimize the ML expertise required for the solution. What should you do?
A. Use BigQuery ML to create a logistic regression model for purchase prediction.
B. Use Vertex AI Workbench to develop a custom model for purchase prediction.
C. Use Colab Enterprise to develop a custom model for purchase prediction.
D. Export the data to Cloud Storage, and use AutoML Tables to build a classification model for purchase prediction.
Answer: A
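Why A: BigQuery ML trains and serves models with plain SQL against data that is already in BigQuery, which minimizes both engineering effort and required ML expertise; option D adds an unnecessary export step. A minimal sketch via the BigQuery Python client; the dataset, table, and column names are hypothetical:

```python
# Minimal sketch: train and apply a purchase-propensity model with BigQuery ML.
# Dataset, table, and column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

client.query("""
CREATE OR REPLACE MODEL `ecommerce.purchase_propensity`
OPTIONS (model_type = 'LOGISTIC_REG',
         input_label_cols = ['purchased_next_month']) AS
SELECT age_bucket, region, lifetime_value, sessions_last_30d,
       purchased_next_month
FROM `ecommerce.customer_features`
""").result()  # blocks until training completes

# Score current customers; ML.PREDICT adds predicted_* columns.
rows = client.query("""
SELECT customer_id,
       predicted_purchased_next_month,
       predicted_purchased_next_month_probs
FROM ML.PREDICT(MODEL `ecommerce.purchase_propensity`,
                (SELECT * FROM `ecommerce.customer_features_current`))
""").result()
for row in rows:
    print(row.customer_id, row.predicted_purchased_next_month)
```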
Question #9 (Topic: Exam A)
You are designing a pipeline to process data files that arrive in Cloud Storage by 3:00 AM each day. Data processing is performed in stages, where the output of one stage becomes the input of the next. Each stage takes a long time to run. Occasionally a stage fails, and you have to address the problem. You need to ensure that the final output is generated as quickly as possible. What should you do?
A. Design a Spark program that runs under Dataproc. Code the program to wait for user input when an error is detected. Rerun the last action after correcting any stage output data errors.
B. Design the pipeline as a set of PTransforms in Dataflow. Restart the pipeline after correcting any stage output data errors.
C. Design the workflow as a Cloud Workflows instance. Code the workflow to jump to a given stage based on an input parameter. Rerun the workflow after correcting any stage output data errors.
D. Design the processing as a directed acyclic graph (DAG) in Cloud Composer. Clear the state of the failed task after correcting any stage output data errors.
Answer: D
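Why D: a Cloud Composer (Airflow) DAG persists per-task state, so after you fix the underlying data you can clear only the failed task, and the run resumes from that stage instead of re-executing earlier long-running stages. A minimal DAG sketch, assuming Airflow 2 and placeholder stage commands:

```python
# Minimal Airflow 2 DAG sketch for Cloud Composer; the stage commands are
# placeholders. If stage_2 fails, fix the bad stage output, then clear only
# stage_2's state (Airflow UI or `airflow tasks clear`); the run resumes
# there rather than rerunning the earlier long-running stages.
import pendulum
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_staged_processing",
    schedule_interval="0 3 * * *",  # input files arrive by 3:00 AM
    start_date=pendulum.datetime(2024, 1, 1, tz="UTC"),
    catchup=False,
) as dag:
    stage_1 = BashOperator(task_id="stage_1", bash_command="echo run stage 1")
    stage_2 = BashOperator(task_id="stage_2", bash_command="echo run stage 2")
    stage_3 = BashOperator(task_id="stage_3", bash_command="echo run stage 3")

    # Each stage's output feeds the next, so the tasks run strictly in order.
    stage_1 >> stage_2 >> stage_3
```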
Question #10 (Topic: Exam A)
Another team in your organization is requesting access to a BigQuery dataset. You need to share the dataset with the team while minimizing the risk of unauthorized copying of data. You also want to create a reusable framework in case you need to share this data with other teams in the future. What should you do?
A. Create authorized views in the team’s Google Cloud project that is only accessible by the team.
B. Create a private exchange using Analytics Hub with data egress restriction, and grant access to the team members.
C. Enable domain restricted sharing on the project. Grant the team members the BigQuery Data Viewer IAM role on the dataset.
D. Export the dataset to a Cloud Storage bucket in the team’s Google Cloud project that is only accessible by the team.
Answer: B
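Why B: an Analytics Hub private exchange is a reusable sharing framework (add listings or subscribers as more teams need access), and its restricted export configuration lets subscribers query the data without copying or exporting it. A rough sketch, assuming the google-cloud-bigquery-analyticshub client library; the resource IDs are placeholders and the exact type and field names should be verified against the current client reference:

```python
# Rough sketch with the Analytics Hub client
# (pip install google-cloud-bigquery-analyticshub); all resource IDs are
# placeholders, and type/field names should be checked against the current
# client reference.
from google.cloud import bigquery_analyticshub_v1 as ah

client = ah.AnalyticsHubServiceClient()
parent = "projects/example-project/locations/us"

# 1. A private exchange: only principals granted IAM roles on it can see it.
exchange = client.create_data_exchange(
    request=ah.CreateDataExchangeRequest(
        parent=parent,
        data_exchange_id="internal_sharing",
        data_exchange=ah.DataExchange(display_name="Internal sharing"),
    )
)

# 2. List the dataset with restricted export so subscribers can query the
#    data but not copy or export it.
client.create_listing(
    request=ah.CreateListingRequest(
        parent=exchange.name,
        listing_id="customer_dataset",
        listing=ah.Listing(
            display_name="Customer dataset",
            bigquery_dataset=ah.Listing.BigQueryDatasetSource(
                dataset="projects/example-project/datasets/customer_data"
            ),
            restricted_export_config=ah.Listing.RestrictedExportConfig(
                enabled=True
            ),
        ),
    )
)

# 3. Grant the requesting team the Analytics Hub Subscriber role on the
#    exchange (via its IAM policy); repeat for future teams to reuse the
#    same framework.
```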