Google Professional Machine Learning Engineer Exam

Question #6 (Topic: Single Topic)
You work for an online retail company that is creating a visual search engine. You have set up an end-to-end ML pipeline on Google Cloud to classify whether an
image contains your company's product. Expecting the release of new products in the near future, you configured a retraining functionality in the pipeline so that
new data can be fed into your ML models. You also want to use AI Platform's continuous evaluation service to ensure that the models have high accuracy on your
test dataset. What should you do?
A. Keep the original test dataset unchanged even if newer products are incorporated into retraining.
B. Extend your test dataset with images of the newer products when they are introduced to retraining.
C. Replace your test dataset with images of the newer products when they are introduced to retraining.
D. Update your test dataset with images of the newer products when your evaluation metrics drop below a pre-decided threshold.
Answer: B
Question #7 (Topic: Single Topic)
You need to build classification workflows over several structured datasets currently stored in BigQuery. Because you will be performing the classification several
times, you want to complete the following steps without writing code: exploratory data analysis, feature selection, model building, training, and hyperparameter
tuning and serving. What should you do?
A. Configure AutoML Tables to perform the classification task.
B. Run a BigQuery ML task to perform logistic regression for the classification.
C. Use AI Platform Notebooks to run the classification model with pandas library.
D. Use AI Platform to run the classification model job configured for hyperparameter tuning.
Answer: A
Question #8 (Topic: Single Topic)
You work for a public transportation company and need to build a model to estimate delay times for multiple transportation routes. Predictions are served directly
to users in an app in real time. Because different seasons and population increases impact the data relevance, you will retrain the model every month. You want to
follow Google-recommended best practices. How should you configure the end-to-end architecture of the predictive model?
A. Configure Kubeflow Pipelines to schedule your multi-step workflow from training to deploying your model.
B. Use a model trained and deployed on BigQuery ML, and trigger retraining with the scheduled query feature in BigQuery.
C. Write a Cloud Functions script that launches a training and deploying job on AI Platform that is triggered by Cloud Scheduler.
D. Use Cloud Composer to programmatically schedule a Dataflow job that executes the workflow from training to deploying your model.
Answer: A
Question #9 (Topic: Single Topic)
You are developing ML models with AI Platform for image segmentation on CT scans. You frequently update your model architectures based on the newest
available research papers, and have to rerun training on the same dataset to benchmark their performance. You want to minimize computation costs and manual
intervention while having version control for your code. What should you do?
A. Use Cloud Functions to identify changes to your code in Cloud Storage and trigger a retraining job.
B. Use the gcloud command-line tool to submit training jobs on AI Platform when you update your code.
C. Use Cloud Build linked with Cloud Source Repositories to trigger retraining when new code is pushed to the repository.
D. Create an automated workflow in Cloud Composer that runs daily and looks for changes in code in Cloud Storage using a sensor.
Answer: C
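A Cloud Build trigger linked to the repository can submit the retraining job automatically on every push. The sketch below is a hypothetical cloudbuild.yaml illustrating this pattern; the region, module path, package path, and staging bucket are placeholder values, not values from the question.

```yaml
# Hypothetical cloudbuild.yaml: submit an AI Platform training job on each push.
# All resource names below (bucket, module, package path) are illustrative.
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        # Job IDs must be unique, so a timestamp is appended to the job name.
        gcloud ai-platform jobs submit training "seg_$(date +%Y%m%d_%H%M%S)" \
          --region=us-central1 \
          --module-name=trainer.task \
          --package-path=trainer/ \
          --staging-bucket=gs://my-staging-bucket
```

Because the trigger fires only on pushes to Cloud Source Repositories, retraining runs exactly when code changes, which keeps computation costs down compared with a daily scheduled check.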
Question #10 (Topic: Single Topic)
Your team needs to build a model that predicts whether images contain a driver's license, passport, or credit card. The data engineering team already built the
pipeline and generated a dataset composed of 10,000 images with driver's licenses, 1,000 images with passports, and 1,000 images with credit cards. You now
have to train a model with the following label map: ['drivers_license', 'passport', 'credit_card']. Which loss function should you use?
A. Categorical hinge
B. Binary cross-entropy
C. Categorical cross-entropy
D. Sparse categorical cross-entropy
Answer: C
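The choice between C and D hinges on label encoding: categorical cross-entropy expects one-hot label vectors, while sparse categorical cross-entropy expects integer class indices. A string label map like the one in the question is typically one-hot encoded, which points to C. A minimal numpy sketch of both formulations (the sample labels and probabilities below are illustrative):

```python
import numpy as np

def categorical_cross_entropy(y_true_onehot, y_pred):
    """Cross-entropy with one-hot labels: -mean over samples of sum(y * log(p))."""
    eps = 1e-12  # guard against log(0)
    return float(-np.mean(np.sum(y_true_onehot * np.log(y_pred + eps), axis=1)))

def sparse_categorical_cross_entropy(y_true_int, y_pred):
    """Same loss, but labels are integer class indices rather than one-hot vectors."""
    eps = 1e-12
    return float(-np.mean(np.log(y_pred[np.arange(len(y_true_int)), y_true_int] + eps)))

# Label map: 0 = drivers_license, 1 = passport, 2 = credit_card
y_onehot = np.array([[1, 0, 0], [0, 1, 0]])   # one-hot encoding of classes 0 and 1
y_int = np.array([0, 1])                      # same labels as integer indices
probs = np.array([[0.7, 0.2, 0.1],            # predicted class probabilities
                  [0.1, 0.8, 0.1]])

# Both formulations compute the identical value for equivalent labels;
# only the label encoding they accept differs.
print(np.isclose(categorical_cross_entropy(y_onehot, probs),
                 sparse_categorical_cross_entropy(y_int, probs)))  # True
```

Binary cross-entropy (B) fits two-class or multi-label problems, and categorical hinge (A) is an SVM-style margin loss, so neither matches a standard three-class softmax classifier.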
Total 339 questions