Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB v1.0 (DP-420)


Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.
You need to make the contents of container1 available as reference data for an Azure Stream Analytics job.
Solution: You create an Azure Data Factory pipeline that uses Azure Cosmos DB Core (SQL) API as the input and Azure Blob Storage as the output.
Does this meet the goal?

  • A. Yes
  • B. No


Answer : A

Explanation:
Azure Stream Analytics accepts reference data from Azure Blob Storage and Azure SQL Database. An Azure Data Factory copy pipeline that reads from the Azure Cosmos DB Core (SQL) API account and writes to Azure Blob Storage therefore makes the contents of container1 available to the Stream Analytics job as reference data.
Reference:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-use-reference-data
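Once the data lands in Azure Blob Storage, it is added to the Stream Analytics job as a reference input. The following is a hedged sketch of such an input definition; the input name, storage account values, container, and path pattern are illustrative assumptions:
{
  "name": "container1-reference",
  "properties": {
    "type": "Reference",
    "datasource": {
      "type": "Microsoft.Storage/Blob",
      "properties": {
        "storageAccounts": [
          { "accountName": "<storage-account-name>", "accountKey": "<storage-account-key>" }
        ],
        "container": "referencedata",
        "pathPattern": "container1/{date}/{time}/*.json",
        "dateFormat": "yyyy/MM/dd",
        "timeFormat": "HH-mm"
      }
    },
    "serialization": {
      "type": "Json",
      "properties": { "encoding": "UTF8" }
    }
  }
}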

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.
You need to make the contents of container1 available as reference data for an Azure Stream Analytics job.
Solution: You create an Azure function that uses Azure Cosmos DB Core (SQL) API change feed as a trigger and Azure event hub as the output.
Does this meet the goal?

  • A. Yes
  • B. No


Answer : B

Explanation:
The Azure Cosmos DB change feed is a mechanism to get a continuous and incremental feed of records from an Azure Cosmos container as those records are created or modified. Change feed support works by listening to the container for any changes and then outputs the changed documents in the order in which they were modified. However, an Azure function that writes the change feed to an Azure event hub produces a streaming input, and Azure Stream Analytics accepts reference data only from Azure Blob Storage or Azure SQL Database, so this solution does not meet the goal.
Reference:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-use-reference-data https://docs.microsoft.com/en-us/azure/cosmos-db/sql/changefeed-ecommerce-solution
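For reference, the evaluated solution wires a change feed trigger to an event hub output in the function's function.json, along the lines of the following sketch (version 3 extension property names; the database, event hub, and connection setting names are illustrative assumptions):
{
  "bindings": [
    {
      "type": "cosmosDBTrigger",
      "name": "documents",
      "direction": "in",
      "connectionStringSetting": "CosmosDBConnection",
      "databaseName": "db1",
      "collectionName": "container1",
      "createLeaseCollectionIfNotExists": true
    },
    {
      "type": "eventHub",
      "name": "outputEvents",
      "direction": "out",
      "connection": "EventHubConnection",
      "eventHubName": "hub1"
    }
  ]
}
This pattern streams changes; it does not produce reference data.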

Case Study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.

To start the case study -
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

Overview -
Litware, Inc. is a United States-based grocery retailer. Litware has a main office and a primary datacenter in Seattle. The company has 50 retail stores across the United States and an emerging online presence. Each store connects directly to the internet.
Existing environment. Cloud and Data Service Environments.
Litware has an Azure subscription that contains the resources shown in the following table.


Each container in productdb is configured for manual throughput.
The con-product container stores the company's product catalog data. Each document in con-product includes a con-productVendor value. Most queries targeting the data in con-product are in the following format.
SELECT * FROM con-product p WHERE p.con-productVendor = 'name'
Most queries targeting the data in the con-productVendor container are in the following format.
SELECT * FROM con-productVendor pv
ORDER BY pv.creditRating, pv.yearFounded
Existing environment. Current Problems.
Litware identifies the following issues:
Updates to product categories in the con-productVendor container do not propagate automatically to documents in the con-product container.
Application updates in con-product frequently cause HTTP status code 429 "Too many requests". You discover that the 429 status code relates to excessive request unit (RU) consumption during the updates.

Requirements. Planned Changes -
Litware plans to implement a new Azure Cosmos DB Core (SQL) API account named account2 that will contain a database named iotdb. The iotdb database will contain two containers named con-iot1 and con-iot2.
Litware plans to make the following changes:
Store the telemetry data in account2.
Configure account1 to support multiple read-write regions.
Implement referential integrity for the con-product container.
Use Azure Functions to send notifications about product updates to different recipients.
Develop an app named App1 that will run from all locations and query the data in account1.
Develop an app named App2 that will run from the retail stores and query the data in account2. App2 must be limited to a single DNS endpoint when accessing account2.
Requirements. Business Requirements
Litware identifies the following business requirements:
Whenever there are multiple solutions for a requirement, select the solution that provides the best performance, as long as there are no additional costs associated.
Ensure that Azure Cosmos DB costs for IoT-related processing are predictable.
Minimize the number of firewall changes in the retail stores.
Requirements. Product Catalog Requirements
Litware identifies the following requirements for the product catalog:
Implement a custom conflict resolution policy for the product catalog data.
Minimize the frequency of errors during updates of the con-product container.
Once multi-region writes are configured, maximize the performance of App1 queries against the data in account1.
Trigger the execution of two Azure functions following every update to any document in the con-product container.

You configure multi-region writes for account1.
You need to ensure that App1 supports the new configuration for account1. The solution must meet the business requirements and the product catalog requirements.
What should you do?

  • A. Set the default consistency level of account1 to bounded staleness.
  • B. Create a private endpoint connection.
  • C. Modify the connection policy of App1.
  • D. Increase the number of request units per second (RU/s) allocated to the con-product and con-productVendor containers.


Answer : C

Explanation:
After multi-region writes are enabled for account1, App1 must opt in through its connection policy: update the Azure Cosmos DB SDK connection policy to use multiple write locations and to set the application's current region, so that reads and writes are routed to the nearest region. This maximizes the performance of App1 queries at no additional cost.
Scenario:
Develop an app named App1 that will run from all locations and query the data in account1.
Once multi-region writes are configured, maximize the performance of App1 queries against the data in account1.
Whenever there are multiple solutions for a requirement, select the solution that provides the best performance, as long as there are no additional costs associated.
Incorrect Answers:
A: Changing the default consistency level does not, by itself, make App1 use the additional write regions. Note: Bounded staleness is frequently chosen by globally distributed applications that expect low write latencies but require a total global order guarantee.
D: Increasing the request units per second (RU/s), the performance currency abstracting system resources such as CPU, IOPS, and memory, would incur additional costs, which violates the business requirements.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/sql/how-to-multi-master https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels
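For context, multi-region writes are enabled at the account level. A minimal Resource Manager sketch for account1 is shown below; the API version and region names are illustrative assumptions:
{
  "type": "Microsoft.DocumentDB/databaseAccounts",
  "apiVersion": "2021-10-15",
  "name": "account1",
  "properties": {
    "databaseAccountOfferType": "Standard",
    "enableMultipleWriteLocations": true,
    "locations": [
      { "locationName": "West US 2", "failoverPriority": 0 },
      { "locationName": "East US 2", "failoverPriority": 1 }
    ]
  }
}
App1 then opts in on the client side by enabling multiple write locations and setting its current region in the SDK connection policy.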

Case Study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.

To start the case study -
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

Overview -
Litware, Inc. is a United States-based grocery retailer. Litware has a main office and a primary datacenter in Seattle. The company has 50 retail stores across the United States and an emerging online presence. Each store connects directly to the internet.
Existing environment. Cloud and Data Service Environments.
Litware has an Azure subscription that contains the resources shown in the following table.


Each container in productdb is configured for manual throughput.
The con-product container stores the company's product catalog data. Each document in con-product includes a con-productVendor value. Most queries targeting the data in con-product are in the following format.
SELECT * FROM con-product p WHERE p.con-productVendor = 'name'
Most queries targeting the data in the con-productVendor container are in the following format.
SELECT * FROM con-productVendor pv
ORDER BY pv.creditRating, pv.yearFounded
Existing environment. Current Problems.
Litware identifies the following issues:
Updates to product categories in the con-productVendor container do not propagate automatically to documents in the con-product container.
Application updates in con-product frequently cause HTTP status code 429 "Too many requests". You discover that the 429 status code relates to excessive request unit (RU) consumption during the updates.

Requirements. Planned Changes -
Litware plans to implement a new Azure Cosmos DB Core (SQL) API account named account2 that will contain a database named iotdb. The iotdb database will contain two containers named con-iot1 and con-iot2.
Litware plans to make the following changes:
Store the telemetry data in account2.
Configure account1 to support multiple read-write regions.
Implement referential integrity for the con-product container.
Use Azure Functions to send notifications about product updates to different recipients.
Develop an app named App1 that will run from all locations and query the data in account1.
Develop an app named App2 that will run from the retail stores and query the data in account2. App2 must be limited to a single DNS endpoint when accessing account2.
Requirements. Business Requirements
Litware identifies the following business requirements:
Whenever there are multiple solutions for a requirement, select the solution that provides the best performance, as long as there are no additional costs associated.
Ensure that Azure Cosmos DB costs for IoT-related processing are predictable.
Minimize the number of firewall changes in the retail stores.
Requirements. Product Catalog Requirements
Litware identifies the following requirements for the product catalog:
Implement a custom conflict resolution policy for the product catalog data.
Minimize the frequency of errors during updates of the con-product container.
Once multi-region writes are configured, maximize the performance of App1 queries against the data in account1.
Trigger the execution of two Azure functions following every update to any document in the con-product container.

You need to provide a solution for the Azure Functions notifications following updates to con-product. The solution must meet the business requirements and the product catalog requirements.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

  • A. Configure the trigger for each function to use a different leaseCollectionPrefix
  • B. Configure the trigger for each function to use the same leaseCollectionName
  • C. Configure the trigger for each function to use a different leaseCollectionName
  • D. Configure the trigger for each function to use the same leaseCollectionPrefix


Answer : AB

Explanation:
leaseCollectionPrefix: when set, the value is added as a prefix to the leases created in the Lease collection for this function. Using a prefix allows two separate Azure Functions to share the same Lease collection by using different prefixes.
Scenario: Use Azure Functions to send notifications about product updates to different recipients.
Trigger the execution of two Azure functions following every update to any document in the con-product container.
Reference:
https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb-v2-trigger
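A hedged extract of the trigger binding for the first of the two functions follows; the lease container name and the prefix values are illustrative assumptions:
{
  "type": "cosmosDBTrigger",
  "name": "documents",
  "direction": "in",
  "connectionStringSetting": "CosmosDBConnection",
  "databaseName": "productdb",
  "collectionName": "con-product",
  "leaseCollectionName": "leases",
  "leaseCollectionPrefix": "function1"
}
The second function uses the same leaseCollectionName ("leases") but "leaseCollectionPrefix": "function2", so both functions independently receive every change to con-product while sharing one lease container.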

HOTSPOT -
You have the indexing policy shown in the following exhibit.


Use the drop-down menus to select the answer choice that answers each question based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.
Hot Area:



Answer :

Explanation:
Box 1: ORDER BY c.name DESC, c.age DESC
Queries that have an ORDER BY clause with two or more properties require a composite index.
The following considerations apply when using composite indexes for queries with an ORDER BY clause with two or more properties:
• If the composite index paths do not match the sequence of the properties in the ORDER BY clause, the composite index cannot support the query.
• The order of each composite index path (ascending or descending) must also match the order in the ORDER BY clause.
• The composite index also supports an ORDER BY clause with the opposite order on all paths.
Box 2: At the same time as the item creation
Azure Cosmos DB supports two indexing modes:
• Consistent: The index is updated synchronously as you create, update, or delete items. This means that the consistency of your read queries will be the consistency configured for the account.
• None: Indexing is disabled on the container.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/index-policy
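As an illustration, a composite index that serves ORDER BY c.name DESC, c.age DESC (and, by the opposite-order rule, ORDER BY c.name ASC, c.age ASC) is defined as follows; the rest of the indexing policy is omitted:
{
  "indexingMode": "consistent",
  "compositeIndexes": [
    [
      { "path": "/name", "order": "descending" },
      { "path": "/age", "order": "descending" }
    ]
  ]
}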

You have a container named container1 in an Azure Cosmos DB Core (SQL) API account. Upserts of items in container1 occur every three seconds.
You have an Azure Functions app named function1 that is supposed to run whenever items are inserted or replaced in container1.
You discover that function1 runs, but not on every upsert.
You need to ensure that function1 processes each upsert within one second of the upsert.
Which property should you change in the Function.json file of function1?

  • A. checkpointInterval
  • B. leaseCollectionsThroughput
  • C. maxItemsPerInvocation
  • D. feedPollDelay


Answer : D

Explanation:
An upsert operation inserts a record if it does not already exist or updates the existing record if it does.
feedPollDelay: the time (in milliseconds) to wait between polling a partition for new changes on the feed, after all current changes are drained. The default is 5,000 milliseconds (5 seconds). Because function1 only polls every five seconds by default, it can miss the one-second processing target; reducing feedPollDelay below 1,000 milliseconds meets the requirement.
Incorrect Answers:
A: checkpointInterval: When set, it defines, in milliseconds, the interval between lease checkpoints. Default is always after each Function call.
C: maxItemsPerInvocation: When set, this property sets the maximum number of items received per Function call. If operations in the monitored collection are performed through stored procedures, transaction scope is preserved when reading items from the change feed. As a result, the number of items received could be higher than the specified value so that the items changed by the same transaction are returned as part of one atomic batch.
Reference:
https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb-v2-trigger
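A hedged extract of the trigger binding with the poll delay reduced below one second; the 500-millisecond value and the database and connection names are illustrative assumptions:
{
  "type": "cosmosDBTrigger",
  "name": "documents",
  "direction": "in",
  "connectionStringSetting": "CosmosDBConnection",
  "databaseName": "db1",
  "collectionName": "container1",
  "feedPollDelay": 500
}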

HOTSPOT -
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.
The following is a sample of a document in container1.
{
"studentId": "631282",
"firstName": "James",
"lastName": "Smith",
"enrollmentYear": 1990,
"isActivelyEnrolled": true,
"address": {
"street": "",
"city": "",
"stateProvince": "",
"postal": "",
}
}
The container1 container has the following indexing policy.
{
"indexingMode": "consistent",
"includePaths": [
{
"path": "/*"
},
{
"path": "/address/city/?"
}
],
"excludePaths": [
{
"path": "/address/*"
},
{
"path": "/firstName/?"
}
]
}
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:




Answer :

Explanation:

Box 1: Yes -
"path": "/*" is in includePaths.
Include the root path to selectively exclude paths that don't need to be indexed. This is the recommended approach as it lets Azure Cosmos DB proactively index any new property that may be added to your model.

Box 2: No -
"path": "/firstName/?" is in excludePaths.

Box 3: Yes -
"path": "/address/city/?" is in includePaths
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/index-policy

You have the following query.
SELECT * FROM c
WHERE c.sensor = "TEMP1"
AND c.value < 22
AND c.timestamp >= 1619146031231
You need to recommend a composite index strategy that will minimize the request units (RUs) consumed by the query.
What should you recommend?

  • A. a composite index for (sensor ASC, value ASC) and a composite index for (sensor ASC, timestamp ASC)
  • B. a composite index for (sensor ASC, value ASC, timestamp ASC) and a composite index for (sensor DESC, value DESC, timestamp DESC)
  • C. a composite index for (value ASC, sensor ASC) and a composite index for (timestamp ASC, sensor ASC)
  • D. a composite index for (sensor ASC, value ASC, timestamp ASC)


Answer : A

Explanation:
If a query has a filter with two or more properties, adding a composite index will improve performance.
Consider the following query:
SELECT * FROM c WHERE c.name = "Tim" and c.age > 18
In the absence of a composite index on (name ASC, age ASC), we will utilize a range index for this query. We can improve the efficiency of this query by creating a composite index for name and age.
Queries with multiple equality filters and a maximum of one range filter (such as >,<, <=, >=, !=) will utilize the composite index.
Reference:
https://azure.microsoft.com/en-us/blog/three-ways-to-leverage-composite-indexes-in-azure-cosmos-db/
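Expressed in the container's indexing policy, the strategy from answer A looks like the following sketch; the paths assume sensor, value, and timestamp sit at the document root:
{
  "compositeIndexes": [
    [
      { "path": "/sensor", "order": "ascending" },
      { "path": "/value", "order": "ascending" }
    ],
    [
      { "path": "/sensor", "order": "ascending" },
      { "path": "/timestamp", "order": "ascending" }
    ]
  ]
}
Each composite index pairs the equality property (sensor) with one of the two range properties, so the engine can use whichever index fits each filter.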

HOTSPOT -
You have a database in an Azure Cosmos DB Core (SQL) API account that is used for development.
The database is modified once per day in a batch process.
You need to ensure that you can restore the database if the last batch process fails. The solution must minimize costs.
How should you configure the backup settings? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:




Answer :

HOTSPOT -
You have an Azure Cosmos DB Core (SQL) API account named account1.
You have the Azure virtual networks and subnets shown in the following table.


The vnet1 and vnet2 networks are connected by using virtual network peering.
The Firewall and virtual network settings for account1 are configured as shown in the exhibit.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:



Answer :

Explanation:

Box 1: Yes -
VM1 is on vnet1.subnet1, which has the Endpoint Status enabled.

Box 2: No -
Only the virtual networks and subnets that are added to the Azure Cosmos DB account have access. Peered virtual networks cannot access the account until their subnets are added to the account.

Box 3: No -
Only the virtual networks and subnets that are added to the Azure Cosmos DB account have access.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-configure-vnet-service-endpoint
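In the account's resource definition, the Firewall and virtual networks settings correspond to properties along these lines; the subscription and resource group segments are placeholders:
"properties": {
  "isVirtualNetworkFilterEnabled": true,
  "virtualNetworkRules": [
    {
      "id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/virtualNetworks/vnet1/subnets/subnet1"
    }
  ]
}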

You plan to create an Azure Cosmos DB Core (SQL) API account that will use customer-managed keys stored in Azure Key Vault.
You need to configure an access policy in Key Vault to allow Azure Cosmos DB access to the keys.
Which three permissions should you enable in the access policy? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

  • A. Wrap Key
  • B. Get
  • C. List
  • D. Update
  • E. Sign
  • F. Verify
  • G. Unwrap Key


Answer : ABG

Explanation:
To configure customer-managed keys for your Azure Cosmos DB account with Azure Key Vault, add an access policy to your Azure Key Vault instance:
1. From the Azure portal, go to the Azure Key Vault instance that you plan to use to host your encryption keys. Select Access Policies from the left menu.


2. Select + Add Access Policy.
3. Under the Key permissions drop-down menu, select Get, Unwrap Key, and Wrap Key permissions:

Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-setup-cmk
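In a Resource Manager template, the equivalent access policy grants exactly those three key permissions. A hedged sketch follows; the API version, vault name, tenant ID, and object ID are placeholders:
{
  "type": "Microsoft.KeyVault/vaults/accessPolicies",
  "apiVersion": "2021-10-01",
  "name": "<vault-name>/add",
  "properties": {
    "accessPolicies": [
      {
        "tenantId": "<tenant-id>",
        "objectId": "<azure-cosmos-db-principal-id>",
        "permissions": {
          "keys": [ "get", "unwrapKey", "wrapKey" ]
        }
      }
    ]
  }
}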

You need to configure an Apache Kafka instance to ingest data from an Azure Cosmos DB Core (SQL) API account. The data from a container named telemetry must be added to a Kafka topic named iot. The solution must store the data in a compact binary format.
Which three configuration items should you include in the solution? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

  • A. "connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector"
  • B. "key.converter": "org.apache.kafka.connect.json.JsonConverter"
  • C. "key.converter": "io.confluent.connect.avro.AvroConverter"
  • D. "connect.cosmos.containers.topicmap": "iot#telemetry"
  • E. "connect.cosmos.containers.topicmap": "iot"
  • F. "connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSinkConnector"


Answer : ACD

Explanation:
A: Data flows from Azure Cosmos DB into Kafka, so the solution requires the source connector, com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector. Kafka Connect for Azure Cosmos DB is a connector to read data from and write data to Azure Cosmos DB; the source connector reads the container's change feed and publishes the records to Kafka topics.
C: Avro is a compact binary format, which meets the storage requirement.
D: The connect.cosmos.containers.topicmap setting maps topics to containers in the format topic#container, so "iot#telemetry" publishes the telemetry container to the iot topic.
Incorrect Answers:
B: JSON is a text format, not a compact binary format.
E: "iot" omits the container; the topic map requires the topic#container pair.
F: The sink connector exports data from Kafka topics to Azure Cosmos DB, which is the opposite of the required direction.
Note, a full source connector configuration; the endpoint, key, and database name are placeholders:
{
  "name": "cosmosdb-source-connector",
  "config": {
    "connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector",
    "tasks.max": "1",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "connect.cosmos.task.poll.interval": "100",
    "connect.cosmos.connection.endpoint": "https://<cosmosinstance-name>.documents.azure.com:443/",
    "connect.cosmos.master.key": "<cosmosdbprimarykey>",
    "connect.cosmos.databasename": "<database-name>",
    "connect.cosmos.containers.topicmap": "iot#telemetry"
  }
}
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/sql/kafka-connector-source https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained/

You are implementing an Azure Data Factory data flow that will use an Azure Cosmos DB (SQL API) sink to write a dataset. The data flow will use 2,000 Apache Spark partitions.
You need to ensure that the ingestion from each Spark partition is balanced to optimize throughput.
Which sink setting should you configure?

  • A. Throughput
  • B. Write throughput budget
  • C. Batch size
  • D. Collection action


Answer : C

Explanation:
Batch size: an integer that represents how many objects are written to the Cosmos DB collection in each batch. Usually, starting with the default batch size is sufficient. To further tune this value, note:
Cosmos DB limits a single request's size to 2 MB. The formula is "Request Size = Single Document Size * Batch Size". If you hit an error saying "Request size is too large", reduce the batch size value.
The larger the batch size, the better the throughput the service can achieve, as long as you allocate enough RUs to power your workload. With 2,000 Spark partitions, tuning the batch size keeps the writes issued from each partition balanced.
Incorrect Answers:
A: Throughput: set an optional value for the number of RUs you'd like to apply to your Cosmos DB collection for each execution of this data flow. The minimum is 400.
B: Write throughput budget: an integer that represents the RUs you want to allocate for this Data Flow write operation, out of the total throughput allocated to the collection.
D: Collection action: determines whether to recreate the destination collection prior to writing.
None: No action will be done to the collection.
Recreate: The collection will get dropped and recreated.
Reference:
https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-cosmos-db
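For comparison, when the same Cosmos DB connector is used in a Copy activity rather than a data flow, the batch size surfaces as the sink's writeBatchSize property. A hedged extract; the values are illustrative:
"sink": {
  "type": "CosmosDbSqlApiSink",
  "writeBehavior": "insert",
  "writeBatchSize": 10000
}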

You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.
You need to provide a user named User1 with the ability to insert items into container1 by using role-based access control (RBAC). The solution must use the principle of least privilege.
Which roles should you assign to User1?

  • A. CosmosDB Operator only
  • B. DocumentDB Account Contributor and Cosmos DB Built-in Data Contributor
  • C. DocumentDB Account Contributor only
  • D. Cosmos DB Built-in Data Contributor only


Answer : D

Explanation:
Cosmos DB Built-in Data Contributor is a data-plane role that can perform all data operations on containers and items, including inserting items into container1, while granting no account-management rights, so it satisfies the principle of least privilege.
Incorrect Answers:
A: Cosmos DB Operator can provision Azure Cosmos DB accounts, databases, and containers, but cannot access any data or use Data Explorer, so it cannot insert items.
B, C: DocumentDB Account Contributor can manage Azure Cosmos DB accounts (Azure Cosmos DB was formerly known as DocumentDB); account management exceeds the required privilege, and the role grants no data-plane access on its own.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/role-based-access-control
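A hedged sketch of the corresponding data-plane role assignment follows. The role definition ID 00000000-0000-0000-0000-000000000002 identifies the built-in Data Contributor role; the account resource ID, database name, and principal ID are placeholders:
{
  "roleDefinitionId": "<account-resource-id>/sqlRoleDefinitions/00000000-0000-0000-0000-000000000002",
  "principalId": "<user1-object-id>",
  "scope": "<account-resource-id>/dbs/<database>/colls/container1"
}
Scoping the assignment to container1 keeps the grant as narrow as the requirement allows.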

You have an Azure Cosmos DB Core (SQL) API account.
You configure the diagnostic settings to send all log information to a Log Analytics workspace.
You need to identify when the provisioned request units per second (RU/s) for resources within the account were modified.
You write the following query.

AzureDiagnostics
| where Category == "ControlPlaneRequests"
What should you include in the query?

  • A. | where OperationName startswith "AccountUpdateStart"
  • B. | where OperationName startswith "SqlContainersDelete"
  • C. | where OperationName startswith "MongoCollectionsThroughputUpdate"
  • D. | where OperationName startswith "SqlContainersThroughputUpdate"


Answer : A

Explanation:
The following are the operation names in diagnostic logs for different operations:
RegionAddStart, RegionAddComplete
RegionRemoveStart, RegionRemoveComplete
AccountDeleteStart, AccountDeleteComplete
RegionFailoverStart, RegionFailoverComplete
AccountCreateStart, AccountCreateComplete
AccountUpdateStart, AccountUpdateComplete
VirtualNetworkDeleteStart, VirtualNetworkDeleteComplete
DiagnosticLogUpdateStart, DiagnosticLogUpdateComplete
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/audit-control-plane-logs
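Putting the pieces together, the completed query is:
AzureDiagnostics
| where Category == "ControlPlaneRequests"
| where OperationName startswith "AccountUpdateStart"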
