Designing an Azure Data Solution v1.0 (DP-201)

Page:    1 / 5   
Total 70 questions

HOTSPOT -
You are designing a recovery strategy for your Azure SQL Databases.
The recovery strategy must use the default automated backup settings. The solution must include a point-in-time restore recovery strategy.
You need to recommend which backups to use and the order in which to restore backups.
What should you recommend? To answer, select the appropriate configuration in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area: [image not available]

Answer : [image not available]

Explanation:
All Basic, Standard, and Premium databases are protected by automatic backups. Full backups are taken every week, differential backups every day, and log backups every 5 minutes.
References:
https://azure.microsoft.com/sv-se/blog/azure-sql-database-point-in-time-restore/
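
For illustration, a minimal sketch of a point-in-time restore against the management plane, assuming the azure-mgmt-sql Python SDK; the subscription, resource group, server, database names, and timestamp are all placeholders. The service itself selects and replays the weekly full, daily differential, and log backups behind create_mode="PointInTimeRestore", and the restore always targets a new database.

```python
# Minimal sketch of a point-in-time restore with the azure-mgmt-sql SDK.
# All resource names and the timestamp below are placeholders.
from datetime import datetime, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import Database

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "<resource-group>"     # placeholder
SERVER = "<sql-server-name>"            # placeholder

client = SqlManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# The service replays the full, differential, and log backups for you;
# you only supply the source database and the target timestamp.
source = client.databases.get(RESOURCE_GROUP, SERVER, "production-db")

restore = client.databases.begin_create_or_update(
    RESOURCE_GROUP,
    SERVER,
    "production-db-restored",           # restores always create a new database
    Database(
        location=source.location,
        create_mode="PointInTimeRestore",
        source_database_id=source.id,
        restore_point_in_time=datetime(2023, 1, 15, 12, 0, tzinfo=timezone.utc),
    ),
)
restore.result()  # block until the restore completes
```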

You are developing a solution that performs real-time analysis of IoT data in the cloud.
The solution must remain available during Azure service updates.
You need to recommend a solution.
Which two actions should you recommend? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

  • A. Deploy an Azure Stream Analytics job to two separate regions that are not in a pair.
  • B. Deploy an Azure Stream Analytics job to each region in a paired region.
  • C. Monitor jobs in both regions for failure.
  • D. Monitor jobs in the primary region for failure.
  • E. Deploy an Azure Stream Analytics job to one region in a paired region.


Answer : BC

Explanation:
Stream Analytics guarantees that jobs in paired regions are updated in separate batches. As a result, there is a sufficient time gap between the updates to identify potential breaking bugs and remediate them.
Customers are advised to deploy identical jobs to both paired regions.
In addition to Stream Analytics internal monitoring capabilities, customers are also advised to monitor the jobs as if both are production jobs. If a break is identified as the result of a Stream Analytics service update, escalate appropriately and fail over any downstream consumers to the healthy job's output. Escalating to support prevents the paired region from being affected by the new deployment and maintains the integrity of the paired jobs.
References:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-job-reliability
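
As a sketch of this pattern only, the outline below deploys identical job definitions to both halves of a region pair and monitors both as production jobs; deploy_stream_analytics_job and job_is_healthy are hypothetical helpers standing in for azure-mgmt-streamanalytics calls or ARM template deployments.

```python
# Sketch of the paired-region pattern: identical Stream Analytics jobs in
# both halves of an Azure region pair, both monitored as production.

PAIRED_REGIONS = ("eastus", "westus")  # an example Azure region pair

def deploy_stream_analytics_job(region: str, job_definition: dict) -> None:
    """Hypothetical stand-in for an azure-mgmt-streamanalytics deployment."""
    print(f"deploying {job_definition['name']} to {region}")

def job_is_healthy(region: str, job_name: str) -> bool:
    """Hypothetical stand-in for checking job metrics/state in a region."""
    return True  # replace with real monitoring of both jobs

job_definition = {"name": "iot-realtime", "query": "SELECT * INTO output FROM input"}

# Identical jobs in both regions: service updates roll out to paired regions
# in separate batches, so at least one copy stays on the older build.
for region in PAIRED_REGIONS:
    deploy_stream_analytics_job(region, job_definition)

# Monitor both jobs as production; fail downstream consumers over to the
# healthy region if a service update breaks one of them.
unhealthy = [r for r in PAIRED_REGIONS if not job_is_healthy(r, "iot-realtime")]
if unhealthy:
    print(f"fail over downstream consumers away from: {unhealthy}")
```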

A company is developing a mission-critical line of business app that uses Azure SQL Database Managed Instance.
You must design a disaster recovery strategy for the solution.
You need to ensure that the database automatically recovers when full or partial loss of the Azure SQL Database service occurs in the primary region.
What should you recommend?

  • A. Failover-group
  • B. Azure SQL Data Sync
  • C. SQL Replication
  • D. Active geo-replication


Answer : A

Explanation:
Auto-failover groups is a SQL Database feature that allows you to manage replication and failover of a group of databases on a SQL Database server, or of all databases in a managed instance, to another region (currently in public preview for Managed Instance). It uses the same underlying technology as active geo-replication. You can initiate failover manually, or you can delegate it to the SQL Database service based on a user-defined policy.
References:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-auto-failover-group
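
A hedged sketch of creating such a group for a managed instance pair, assuming the azure-mgmt-sql Python SDK; the model and operation names follow recent SDK versions and should be verified against yours, and all resource IDs are placeholders.

```python
# Sketch: an auto-failover group spanning two managed instances, with the
# "Automatic" policy that lets the service recover the app on regional loss.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import (
    InstanceFailoverGroup,
    InstanceFailoverGroupReadWriteEndpoint,
    ManagedInstancePairInfo,
    PartnerRegionInfo,
)

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.instance_failover_groups.begin_create_or_update(
    "<resource-group>",
    "eastus",                      # primary region of the group
    "mission-critical-fog",        # failover group name (placeholder)
    InstanceFailoverGroup(
        # "Automatic" delegates failover to the service after a grace period,
        # so the database recovers without manual intervention.
        read_write_endpoint=InstanceFailoverGroupReadWriteEndpoint(
            failover_policy="Automatic",
            failover_with_data_loss_grace_period_minutes=480,
        ),
        partner_regions=[PartnerRegionInfo(location="westus")],
        managed_instance_pairs=[
            ManagedInstancePairInfo(
                primary_managed_instance_id="<primary-mi-resource-id>",
                partner_managed_instance_id="<secondary-mi-resource-id>",
            )
        ],
    ),
)
poller.result()
```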

HOTSPOT -
A company has locations in North America and Europe. The company uses Azure SQL Database to support business apps.
Employees must be able to access the app data in case of a region-wide outage. A multi-region availability solution is needed with the following requirements:
-> Read-access to data in a secondary region must be available only in case of an outage of the primary region.
-> The Azure SQL Database compute and storage layers must be integrated and replicated together.
You need to design the multi-region high availability solution.
What should you recommend? To answer, select the appropriate values in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area: [image not available]

Answer : [image not available]

Explanation:

Box 1: Standard -
The following table, which is not reproduced here, describes the types of storage accounts and their capabilities.

Box 2: Geo-redundant storage -
If your storage account has GRS enabled, then your data is durable even in the case of a complete regional outage or a disaster in which the primary region isn't recoverable.
Note: If you opt for GRS, you have two related options to choose from:
GRS replicates your data to another data center in a secondary region, but that data is available to be read only if Microsoft initiates a failover from the primary to secondary region.
Read-access geo-redundant storage (RA-GRS) is based on GRS. RA-GRS replicates your data to another data center in a secondary region, and also provides you with the option to read from the secondary region. With RA-GRS, you can read from the secondary region regardless of whether Microsoft initiates a failover from the primary to secondary region.

References:
https://docs.microsoft.com/en-us/azure/storage/common/storage-introduction
https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy-grs
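
For illustration, a minimal sketch of provisioning a Standard account with GRS using the azure-mgmt-storage Python SDK; the subscription, resource group, and account names are placeholders. GRS, rather than RA-GRS, matches the requirement that the secondary region is readable only after an outage-driven failover.

```python
# Sketch: a Standard-tier storage account with geo-redundant replication.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import Sku, StorageAccountCreateParameters

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.storage_accounts.begin_create(
    "<resource-group>",
    "companydata001",                   # must be globally unique; placeholder
    StorageAccountCreateParameters(
        sku=Sku(name="Standard_GRS"),   # Standard tier + geo-redundancy
        kind="StorageV2",
        location="northeurope",
    ),
)
account = poller.result()
print(account.secondary_location)  # the paired region holding the replica
```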

A company is designing a solution that uses Azure Databricks.
The solution must be resilient to regional Azure datacenter outages.
You need to recommend the redundancy type for the solution.
What should you recommend?

  • A. Read-access geo-redundant storage
  • B. Locally-redundant storage
  • C. Geo-redundant storage
  • D. Zone-redundant storage


Answer : C

Explanation:
If your storage account has GRS enabled, then your data is durable even in the case of a complete regional outage or a disaster in which the primary region isn’t recoverable.
References:
https://medium.com/microsoftazure/data-durability-fault-tolerance-resilience-in-azure-databricks-95392982bac7

A company is evaluating data storage solutions.
You need to recommend a data storage solution that meets the following requirements:
-> Minimize costs for storing blob objects.
-> Optimize access for data that is infrequently accessed.
-> Data must be stored for at least 30 days.
-> Data availability must be at least 99 percent.
What should you recommend?

  • A. Premium
  • B. Cold
  • C. Hot
  • D. Archive


Answer : B

Explanation:
Azure's cool storage tier, also known as Azure cool Blob storage, is for infrequently accessed data that needs to be stored for a minimum of 30 days. Typical use cases include backing up data before tiering to archival systems, legal data, media files, system audit information, datasets used for big data analysis, and more.
The storage cost for the cool tier is lower than that of the hot tier. Because data in this tier is expected to be accessed less frequently, the data access charges are higher than those of the hot tier. No application changes are required, because these tiers are accessed through the same APIs used for the rest of Azure Storage.
References:
https://cloud.netapp.com/blog/low-cost-storage-options-on-azure
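
A minimal sketch of placing data directly into the cool tier with the azure-storage-blob Python SDK; the connection string, container, and blob names are placeholders.

```python
# Sketch: writing a blob straight into the Cool access tier.
from azure.storage.blob import BlobServiceClient, StandardBlobTier

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="backups", blob="audit-2023-01.json")

# Cool fits infrequently accessed data kept for at least 30 days.
with open("audit-2023-01.json", "rb") as data:
    blob.upload_blob(data, standard_blob_tier=StandardBlobTier.COOL)

# An existing blob can also be re-tiered in place:
blob.set_standard_blob_tier(StandardBlobTier.COOL)
```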

A company has many applications. Each application is supported by separate on-premises databases.
You must migrate the databases to Azure SQL Database. You have the following requirements:
-> Organize databases into groups based on database usage.
-> Define the maximum resource limit available for each group of databases.
You need to recommend technologies to scale the databases to support expected increases in demand.
What should you recommend?

  • A. Read scale-out
  • B. Managed instances
  • C. Elastic pools
  • D. Database sharding


Answer : C

Explanation:
SQL Database elastic pools are a simple, cost-effective solution for managing and scaling multiple databases that have varying and unpredictable usage demands. The databases in an elastic pool are on a single Azure SQL Database server and share a set number of resources at a set price.
You can configure resources for the pool based either on the DTU-based purchasing model or the vCore-based purchasing model.
Incorrect Answers:
D: Database sharding is a type of horizontal partitioning that splits large databases into smaller components, which are faster and easier to manage.
References:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-elastic-pool
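
A hedged sketch of one usage-based group as a Standard elastic pool with a per-database eDTU cap, assuming the azure-mgmt-sql Python SDK; the SKU and resource names are placeholders and the capacities are illustrative only.

```python
# Sketch: a DTU-model elastic pool that caps resources for a group of
# databases while letting them share a fixed budget of eDTUs.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import (
    ElasticPool,
    ElasticPoolPerDatabaseSettings,
    Sku,
)

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.elastic_pools.begin_create_or_update(
    "<resource-group>",
    "<sql-server-name>",
    "reporting-pool",                  # one pool per usage-based group
    ElasticPool(
        location="westeurope",
        sku=Sku(name="StandardPool", tier="Standard", capacity=100),  # 100 shared eDTUs
        per_database_settings=ElasticPoolPerDatabaseSettings(
            min_capacity=0,            # no eDTUs reserved per database
            max_capacity=20,           # no database may exceed 20 eDTUs
        ),
    ),
)
poller.result()
```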

You have an on-premises MySQL database that is 800 GB in size.
You need to migrate a MySQL database to Azure Database for MySQL. You must minimize service interruption to live sites or applications that use the database.
What should you recommend?

  • A. Azure Database Migration Service
  • B. Dump and restore
  • C. Import and export
  • D. MySQL Workbench


Answer : A

Explanation:
You can perform MySQL migrations to Azure Database for MySQL with minimal downtime by using the newly introduced continuous sync capability of the Azure Database Migration Service (DMS). This functionality limits the amount of downtime incurred by the application.
References:
https://docs.microsoft.com/en-us/azure/mysql/howto-migrate-online
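
The flow is sketched conceptually below; the helper functions are hypothetical stand-ins for the DMS API or portal steps, included only to show the continuous-sync-then-cutover sequence that keeps the application online.

```python
# Conceptual sketch of an online (minimal-downtime) DMS migration.
import time

def start_online_migration(source: str, target: str) -> str:
    """Hypothetical stand-in for creating a DMS online migration task."""
    print(f"continuous sync started: {source} -> {target}")
    return "task-001"

def replication_lag_seconds(task_id: str) -> float:
    """Hypothetical stand-in for reading sync progress from DMS."""
    return 0.0  # replace with a real lag metric

def cutover(task_id: str) -> None:
    """Hypothetical stand-in for the DMS cutover action."""
    print(f"cutover complete for {task_id}")

task = start_online_migration(
    "onprem-mysql:3306/appdb",                  # placeholder source
    "<server>.mysql.database.azure.com/appdb",  # placeholder target
)

# The initial load copies the 800 GB database while the source stays online;
# continuous sync then replays ongoing changes until the target catches up.
while replication_lag_seconds(task) > 5:
    time.sleep(30)  # applications keep writing to the source the whole time

# Downtime is limited to this final cutover window.
cutover(task)
```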

You plan to deploy an Azure SQL Database instance to support an application. You plan to use the DTU-based purchasing model.
Backups of the database must be available for 30 days and point-in-time restoration must be possible.
You need to recommend a backup and recovery policy.
What are two possible ways to achieve the goal? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.

  • A. Use the Premium tier and the default backup retention policy.
  • B. Use the Basic tier and the default backup retention policy.
  • C. Use the Standard tier and the default backup retention policy.
  • D. Use the Standard tier and configure a long-term backup retention policy.
  • E. Use the Premium tier and configure a long-term backup retention policy.


Answer : DE

Explanation:
The default retention period for a database created using the DTU-based purchasing model depends on the service tier:
-> Basic service tier is 1 week.
-> Standard service tier is 5 weeks.
-> Premium service tier is 5 weeks.
Incorrect Answers:
B: Basic tier only allows restore points within 7 days.
References:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-long-term-retention
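
A hedged sketch of adding a long-term retention policy on top of the default point-in-time retention, assuming the azure-mgmt-sql Python SDK; the operation and model names vary across SDK versions (older releases expose backup_long_term_retention_policies instead), and all resource names are placeholders.

```python
# Sketch: long-term retention of weekly full backups alongside the
# default PITR retention of the service tier.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import LongTermRetentionPolicy

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Keep weekly full backups for 5 weeks (ISO 8601 duration), comfortably
# covering the 30-day requirement; point-in-time restore still comes from
# the default automated-backup retention.
poller = client.long_term_retention_policies.begin_create_or_update(
    "<resource-group>",
    "<sql-server-name>",
    "<database-name>",
    "default",                                  # the only valid policy name
    LongTermRetentionPolicy(weekly_retention="P5W"),
)
poller.result()
```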

You are designing an Azure Databricks cluster that runs user-defined local processes.
You need to recommend a cluster configuration that meets the following requirements:
-> Minimize query latency
-> Reduce overall costs
-> Maximize the number of users that can run queries on the cluster at the same time.
Which cluster type should you recommend?

  • A. Standard with Autoscaling
  • B. High Concurrency with Auto Termination
  • C. High Concurrency with Autoscaling
  • D. Standard with Auto Termination


Answer : C

Explanation:
High Concurrency clusters allow multiple users to run queries on the cluster at the same time, while minimizing query latency. Autoscaling clusters can reduce overall costs compared to a statically-sized cluster.
Incorrect Answers:
A, D: Standard clusters are recommended for a single user.
References:
https://docs.azuredatabricks.net/user-guide/clusters/create.html
https://docs.azuredatabricks.net/user-guide/clusters/high-concurrency.html#high-concurrency
https://docs.azuredatabricks.net/user-guide/clusters/terminate.html
https://docs.azuredatabricks.net/user-guide/clusters/sizing.html#enable-and-configure-autoscaling
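
For illustration, a sketch of creating a High Concurrency autoscaling cluster through the Databricks Clusters REST API (api/2.0/clusters/create); the workspace URL, token, and node type are placeholders, and on this API version the High Concurrency profile was selected via the "serverless" spark_conf shown.

```python
# Sketch: a High Concurrency cluster with autoscaling via the Clusters API.
import requests

WORKSPACE = "https://<workspace>.azuredatabricks.net"  # placeholder
TOKEN = "<personal-access-token>"                      # placeholder

cluster_spec = {
    "cluster_name": "shared-analytics",
    "spark_version": "7.3.x-scala2.12",
    "node_type_id": "Standard_DS3_v2",
    "autoscale": {"min_workers": 2, "max_workers": 8},  # scale with demand
    "spark_conf": {
        # High Concurrency profile: fair sharing and preemption so many
        # users can run queries concurrently with low query latency.
        "spark.databricks.cluster.profile": "serverless",
        "spark.databricks.repl.allowedLanguages": "sql,python,r",
    },
    "custom_tags": {"ResourceClass": "Serverless"},
}

resp = requests.post(
    f"{WORKSPACE}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=cluster_spec,
)
resp.raise_for_status()
print(resp.json()["cluster_id"])
```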
