Perform Big Data Engineering on Microsoft Cloud Services v1.0 (70-776)


HOTSPOT -
You use Microsoft Visual Studio to develop custom solutions for customers who use Microsoft Azure Data Lake Analytics.
You install the Data Lake Tools for Visual Studio.
You need to identify which tasks can be performed from Visual Studio and which tasks can be performed from the Azure portal.
What should you identify for each task? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area: [image not available]

Answer : [image not available]

References:
https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-data-lake-store
https://github.com/toddkitta/azure-content/blob/master/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-get-started.md
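
With the Data Lake Tools installed, Visual Studio is where you author, submit, and debug U-SQL jobs against the Data Lake Analytics account. The following is a minimal U-SQL script of the kind you would submit from Visual Studio; the input path and schema are illustrative assumptions, not part of the question:

// Hypothetical input path and schema, for illustration only.
@searchlog =
    EXTRACT UserId int,
            Query  string
    FROM "/input/searchlog.tsv"
    USING Extractors.Tsv();

// Count how often each query appears.
@result =
    SELECT Query,
           COUNT(*) AS QueryCount
    FROM @searchlog
    GROUP BY Query;

OUTPUT @result
    TO "/output/querycounts.csv"
    USING Outputters.Csv();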

DRAG DROP -
You need to copy data from Microsoft Azure SQL Database to Azure Data Lake Store by using Azure Data Factory.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place: [image not available]

Answer : [image not available]

References:
https://azure.microsoft.com/en-us/blog/creating-big-data-pipelines-using-azure-data-lake-and-azure-data-factory/
https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-sql-database#invoking-stored-procedure-for-sql-sink

You plan to use Microsoft Azure Data Factory to copy data daily from an Azure SQL data warehouse to an Azure Data Lake Store.
You need to define a linked service for the Data Lake Store. The solution must prevent the access token from expiring.
Which type of authentication should you use?

  • A. OAuth
  • B. service-to-service
  • C. Basic
  • D. service principal


Answer : D

References:
https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-data-lake-store

You are developing an application by using the Microsoft .NET SDK. The application will access data from a Microsoft Azure Data Lake folder.
You plan to authenticate the application by using service-to-service authentication.
You need to ensure that the application can access the Data Lake folder.
Which three actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

  • A. Configure the application to use the OAuth 2.0 token endpoint.
  • B. Configure the application to use the application ID, authentication key, and tenant ID.
  • C. Configure the application to use the application ID and redirect URI.
  • D. Register an Azure Active Directory app that uses the Native application type.
  • E. Register an Azure Active Directory app that uses the Web app/API application type.
  • F. Assign the Azure Active Directory app permission to the Data Lake Store folder.


Answer : ABE

References:
https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-service-to-service-authenticate-using-active-directory

You plan to use Microsoft Azure Event Hubs to ingest data. You plan to use Azure Stream Analytics to analyze the data in real time and to send the output directly to Azure Data Lake Store.
You discover duplicate records in the output data.
What is a possible cause of the duplicate records?

  • A. There is a connectivity issue between the data source and the event hub.
  • B. There are multiple deliveries to the output adapter that writes the output events.
  • C. The Stream Analytics output adapter writes the output events transactionally.
  • D. There are connectivity issues with the output adapter.


Answer : D

References:
https://msdn.microsoft.com/en-us/azure/stream-analytics/reference/event-delivery-guarantees-azure-stream-analytics#duplicate-records

You have the following process:
-> A CSV file is ingested by Microsoft Azure Stream Analytics.
-> Scoring is performed by Azure Machine Learning.
-> Stream Analytics returns sentiment scoring through a web service endpoint.
-> Stream Analytics creates an output blob.
You need to view the output of the scoring operation and to evaluate the throughput to the Machine Learning models.
Which monitoring data should you evaluate from the Azure portal?

  • A. the request count of Stream Analytics
  • B. the request count of Machine Learning
  • C. the event count of Stream Analytics
  • D. the event count of Machine Learning


Answer : B

References:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-machine-learning-integration-tutorial
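
In the linked tutorial, the Machine Learning web service is exposed to the job as a scalar function, so every scoring call is reflected in the Machine Learning request count. A minimal sketch of such a query, assuming a function named sentiment has already been added to the job and that the input and output are named datainput and datamloutput (all names are illustrative):

-- Call the Azure ML function once per event, then project the score.
WITH scored AS (
    SELECT text, sentiment(text) AS result
    FROM datainput
)
SELECT text, result.[Score]
INTO datamloutput
FROM scored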

You have a Microsoft Azure Stream Analytics job.
You are debugging event information manually.
You need to view the event data that is being collected.
Which monitoring data should you view for the Stream Analytics job?

  • A. query
  • B. outputs
  • C. scale
  • D. inputs


Answer : A

References:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-job-diagram-with-metrics

You are building a Microsoft Azure Stream Analytics job definition that includes inputs, queries, and outputs.
You need to create a job that automatically provides the highest level of parallelism for the compute instances.
What should you do?

  • A. Configure event hubs and blobs to use the PartitionKey field as the partition ID.
  • B. Define the number of input partitions to equal the number of output partitions.
  • C. Set the partition key for the inputs, queries, and outputs to use the same partition folders. Configure the queries to use different partition keys.
  • D. Set the partition key for the inputs, queries, and outputs to use the same partition folders. Configure the queries to use uniform partition keys.


Answer : B

References:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-parallelization
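
A sketch of an embarrassingly parallel query under these assumptions: the input and output names are illustrative, and on compatibility levels below 1.2 the input must be read PARTITION BY PartitionId so that each input partition maps directly to an output partition:

-- One independent query instance runs per partition.
SELECT PartitionId,
       COUNT(*) AS EventCount
INTO PartitionedOutput
FROM PartitionedInput PARTITION BY PartitionId
GROUP BY PartitionId, TumblingWindow(minute, 1)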

DRAG DROP -
You are building a data pipeline that uses Microsoft Azure Stream Analytics.
Alerts are generated when the aggregate of data streaming in from devices during a minute-long window matches the values in a rule.
You need to retrieve the following information:
-> The event ID
-> The device ID
-> The application ID that runs the service
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place: [image not available]

Answer : [image not available]

References:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-threshold-based-rules
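
A minimal sketch, not the official answer, of how the three identifiers can be read in a Stream Analytics query. The input name and payload fields are assumptions: the event ID comes from adapter metadata via GetMetadataPropertyValue, while the device ID and application ID are taken here from the event payload (for IoT Hub inputs, the device ID is also available as the 'IoTHub.ConnectionDeviceId' metadata property):

-- Retrieve the event ID from metadata and the other IDs from the payload.
-- The full alerting query would also join a reference data input that
-- holds the rule values and aggregate over TumblingWindow(minute, 1).
SELECT GetMetadataPropertyValue(DeviceInput, 'EventId') AS EventId,
       DeviceInput.deviceId AS DeviceId,
       DeviceInput.applicationId AS ApplicationId
INTO AlertOutput
FROM DeviceInput TIMESTAMP BY eventTime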

DRAG DROP -
You plan to develop a solution for real-time sentiment analysis of Twitter data.
You need to create a Microsoft Azure Stream Analytics job query to count the number of tweets during a period.
Which Window function should you use for each requirement? To answer, drag the appropriate functions to the correct requirements. Each function may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place: [image not available]

Answer : [image not available]

References:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-window-functions
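
For counting tweets over a fixed period, a tumbling window is the usual choice because its intervals are contiguous and non-overlapping. A minimal sketch, assuming an input named TwitterStream with a CreatedAt timestamp field:

-- Count tweets in consecutive, non-overlapping 10-second windows.
-- For overlapping windows, a HoppingWindow would be used instead.
SELECT COUNT(*) AS TweetCount,
       System.Timestamp AS WindowEnd
INTO TweetCounts
FROM TwitterStream TIMESTAMP BY CreatedAt
GROUP BY TumblingWindow(second, 10)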

You plan to deploy a Microsoft Azure Stream Analytics job to filter multiple input streams from IoT devices that have a total data flow of 30 MB/s.
You need to calculate how many streaming units you require for the job. The solution must prevent lag.
What is the minimum number of streaming units required?

  • A. 3
  • B. 10
  • C. 30
  • D. 300


Answer : A

References:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-streaming-unit-consumption

You have an on-premises deployment of Active Directory named contoso.com.
You plan to deploy a Microsoft Azure SQL data warehouse.
You need to ensure that the data warehouse can be accessed by contoso.com users.
Which two components should you deploy? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

  • A. Azure Active Directory B2C
  • B. Azure Active Directory
  • C. Azure AD Connect
  • D. Cloud App Discovery
  • E. Azure AD Privileged Identity Management
  • F. Azure Information Protection


Answer : BC

References:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-aad-authentication-configure
https://docs.microsoft.com/en-us/azure/active-directory/hybrid/whatis-hybrid-identity

DRAG DROP -
You have a Microsoft Azure SQL data warehouse.
Users discover that reports running in the data warehouse take longer than expected to complete.
You need to review the duration of the queries and which users are running the queries currently.
Which dynamic management view should you review for each requirement? To answer, drag the appropriate dynamic management views to the correct requirements. Each dynamic management view may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place: [image not available]

Answer : [image not available]

References:
https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-sessions-transact-sql?view=sql-server-2017
https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-requests-transact-sql?view=sql-server-2017
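
In Azure SQL Data Warehouse, the monitoring views are the PDW variants of the DMVs linked above. A sketch that surfaces both requirements at once (duration from the requests view, the executing user from the sessions view):

-- total_elapsed_time gives the query duration in milliseconds;
-- login_name identifies who is running the query.
SELECT r.request_id,
       r.status,
       r.total_elapsed_time,
       s.login_name
FROM sys.dm_pdw_exec_requests AS r
JOIN sys.dm_pdw_exec_sessions AS s
    ON r.session_id = s.session_id
WHERE r.status = 'Running';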

HOTSPOT -
Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
Start of repeated scenario.
You are migrating an existing on-premises data warehouse named LocalDW to Microsoft Azure. You will use an Azure SQL data warehouse named AzureDW for data storage and an Azure Data Factory named AzureDF for extract, transformation, and load (ETL) functions.
For each table in LocalDW, you create a table in AzureDW.
On the on-premises network, you have a Data Management Gateway.
Some source data is stored in Azure Blob storage. Some source data is stored on an on-premises Microsoft SQL Server instance. The instance has a table named Table1.
After data is processed by using AzureDF, the data must be archived and accessible forever. The archived data must meet a Service Level Agreement (SLA) for availability of 99 percent. If an Azure region fails, the archived data must be available for reading always. The storage solution for the archived data must minimize costs.
End of repeated scenario.
How should you configure the storage to archive the source data? To answer, select the appropriate options in the answer area.
Hot Area: [image not available]

Answer : [image not available]

References:
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers

Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
Start of repeated scenario.
You are migrating an existing on-premises data warehouse named LocalDW to Microsoft Azure. You will use an Azure SQL data warehouse named AzureDW for data storage and an Azure Data Factory named AzureDF for extract, transformation, and load (ETL) functions.
For each table in LocalDW, you create a table in AzureDW.
On the on-premises network, you have a Data Management Gateway.
Some source data is stored in Azure Blob storage. Some source data is stored on an on-premises Microsoft SQL Server instance. The instance has a table named Table1.
After data is processed by using AzureDF, the data must be archived and accessible forever. The archived data must meet a Service Level Agreement (SLA) for availability of 99 percent. If an Azure region fails, the archived data must be available for reading always. The storage solution for the archived data must minimize costs.
End of repeated scenario.
You need to configure an activity to move data from blob storage to AzureDW.
What should you create?

  • A. an automation runbook
  • B. a linked service
  • C. a pipeline
  • D. a dataset


Answer : B

References:
https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-sql-data-warehouse
