Recertification for MCSE: Data Platform v5.0 (70-469)


You need to ensure that a new execution plan is used by usp_GetOrdersByProduct each time the stored procedure runs.
What should you do?

  • A. Execute sp_help 'usp_GetOrdersByProduct'.
  • B. Add WITH (FORCESEEK) to line 69 in usp_GetOrdersByProduct.
  • C. Add WITH RECOMPILE to line 64 in usp_GetOrdersByProduct.
  • D. Execute sp_recompile 'usp_GetOrdersByProduct'.


Answer : C

You need to ensure that a new execution plan is used by usp_GetOrdersByProduct each time the stored procedure runs.
What should you do?

  • A. Execute sp_help 'usp_GetOrdersByProduct'.
  • B. Execute sp_recompile 'usp_GetOrdersByProduct'.
  • C. Add WITH RECOMPILE to line 03 in usp_GetOrdersByProduct.
  • D. Add WITH (FORCESEEK) to line 07 in usp_GetOrdersByProduct.


Answer : C

Explanation:
Ref: http://msdn.microsoft.com/en-us/library/ms190439(v=sql.90).aspx
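A minimal sketch of the WITH RECOMPILE option these two questions rely on; the procedure body and table below are illustrative, not the actual usp_GetOrdersByProduct from the exhibit.

-- Illustrative only: WITH RECOMPILE in the procedure header forces SQL Server to
-- compile a fresh execution plan on every execution instead of reusing a cached one.
CREATE PROCEDURE dbo.usp_GetOrdersByProduct_Example
    @ProductId INT
WITH RECOMPILE
AS
BEGIN
    SELECT OrderId, OrderDate, Quantity
    FROM dbo.OrderDetails          -- hypothetical table
    WHERE ProductId = @ProductId;
END
GO

-- For contrast: sp_recompile only marks the procedure so that it is recompiled the
-- next time it runs; it does not force a new plan on every execution.
EXEC sp_recompile N'dbo.usp_GetOrdersByProduct_Example';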

You need to ensure that usp_AddXMLOrder can be used to validate the XML input from the retailers.
Which parameters should you add to usp_AddXMLOrder on line 04 and line 05? (Each correct answer presents part of the solution. Choose all that apply.)

  • A. @schema varbinary(100).
  • B. @items varchar(max).
  • C. @schema sysname.
  • D. @items varbinary(max).
  • E. @items xml.
  • F. @schema xml.


Answer : C,E
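A hedged sketch of the validation pattern behind these parameters: an untyped xml value is validated by assigning it to a variable typed with an XML schema collection. The schema collection, element names, and procedure name below are assumptions, not taken from the exhibit.

-- Illustrative only. In the real procedure the @schema (sysname) parameter would
-- name the schema collection to use; here one collection is hard-coded for brevity.
CREATE XML SCHEMA COLLECTION dbo.OrderItemsSchema AS
N'<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <xsd:element name="Items">
      <xsd:complexType>
        <xsd:sequence>
          <xsd:element name="Item" maxOccurs="unbounded">
            <xsd:complexType>
              <xsd:attribute name="ProductId" type="xsd:int" use="required" />
              <xsd:attribute name="Quantity" type="xsd:int" use="required" />
            </xsd:complexType>
          </xsd:element>
        </xsd:sequence>
      </xsd:complexType>
    </xsd:element>
  </xsd:schema>';
GO

CREATE PROCEDURE dbo.usp_AddXMLOrder_Example
    @schema sysname,   -- name of the schema collection to validate against
    @items  xml        -- untyped XML payload from the retailer
AS
BEGIN
    -- Assigning untyped xml to a typed xml variable performs the validation;
    -- an error is raised if @items does not conform to the schema collection.
    DECLARE @validated xml(dbo.OrderItemsSchema) = @items;
    -- ... insert the validated items here ...
END
GO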

You need to implement a solution that meets the site requirements.
What should you implement?

  • A. A non-indexed view on Server1
  • B. A non-indexed view on Server2
  • C. A distributed view on Server1
  • D. A distributed view on Server2


Answer : C
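A hedged sketch of a distributed view: a view defined on one server that combines local data with data reached through a linked server. All server, database, table, and column names below are placeholders, not taken from the case study.

-- Runs on Server1; Server2 is assumed to be configured as a linked server.
CREATE VIEW dbo.vw_AllOrders
AS
SELECT OrderId, OrderDate, Amount
FROM dbo.Orders                        -- local table on Server1
UNION ALL
SELECT OrderId, OrderDate, Amount
FROM Server2.Sales.dbo.Orders;         -- remote table reached via the linked server
GO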

You need to implement a solution that addresses the bulk insert requirements.
What should you add to line 08 in usp_ImportOrderDetails?
  • A. LASTROW=0.
  • B. BATCHSIZE=0.
  • C. BATCHSIZE=1000.
  • D. LASTROW=1000.



Answer : C

Topic 7, Fourth Coffee

Background

Corporate Information
Fourth Coffee is a global restaurant chain. There are more than 5,000 locations worldwide.

Physical Locations
Currently a server at each location hosts a SQL Server 2012 instance. Each instance contains a database called StoreTransactions that stores all transactions from point of sale and uploads summary batches nightly. Each server belongs to the COFFECORP domain. Local computer accounts access the StoreTransactions database at each store using the sysadmin and datareaderwriter roles.

Planned changes
* The IT department must consolidate the point of sale database infrastructure.
* The marketing department plans to launch a mobile application for micropayments.
* The finance department wants to deploy an internal tool that will help detect fraud.
Initially, the mobile application will allow customers to make micropayments to buy coffee and other items on the company web site. These micropayments may be sent as gifts to other users and redeemed within an hour of ownership transfer. Later versions will generate profiles based on customer activity that will push texts and ads generated by an analytics application.
When the consolidation is finished and the mobile application is in production, the micropayments and point of sale transactions will use the same database.

Existing Environment

Existing Application Environment
Some stores have been using several pilot versions of the micropayment application. Each version currently is in a database that is independent from the point of sale systems. Some versions have been used in field tests at local stores, and others are hosted at corporate servers. All pilot versions were developed by using SQL Server 2012.

Existing Support Infrastructure
The proposed database for consolidating micropayments and transactions is called CoffeeTransactions. The database is hosted on a SQL Server 2014 Enterprise Edition instance.

You need to redesign the system to meet the scalability requirements of the application.
Develop the solution by selecting and arranging the required code blocks in the correct order.
You may not need all of the code blocks.




Answer :

Explanation: Box 1:


Box 2:

Box 3:

Box 4:

Box 5:

Box 6:

Box 7:

Note:
* MEMORY_OPTIMIZED_DATA
First create a memory-optimized data filegroup and add a container to the filegroup.
Then create a memory-optimized table.
* You must specify a value for the BUCKET_COUNT parameter when you create the memory-optimized table. In most cases the bucket count should be between 1 and 2 times the number of distinct values in the index key.
* Example:
-- create a durable (data will be persisted) memory-optimized table
-- two of the columns are indexed
CREATE TABLE dbo.ShoppingCart (
    ShoppingCartId INT IDENTITY(1,1) PRIMARY KEY NONCLUSTERED,
    UserId INT NOT NULL INDEX ix_UserId NONCLUSTERED HASH WITH (BUCKET_COUNT=1000000),
    CreatedDate DATETIME2 NOT NULL,
    TotalPrice MONEY
) WITH (MEMORY_OPTIMIZED=ON)
GO
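A hedged sketch of the first step mentioned in the note above, creating the memory-optimized data filegroup and adding a container to it. The database name comes from the case study; the filegroup name and file path are placeholders.

ALTER DATABASE CoffeeTransactions
    ADD FILEGROUP CoffeeTransactions_mod CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE CoffeeTransactions
    ADD FILE (NAME = N'CoffeeTransactions_mod1',
              FILENAME = N'C:\Data\CoffeeTransactions_mod1')   -- placeholder path
    TO FILEGROUP CoffeeTransactions_mod;
GO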

You need to optimize the index structure that is used by the tables that support the fraud detection services.
What should you do?

  • A. Add a hashed nonclustered index to CreateDate.
  • B. Add a non-hash nonclustered index to CreateDate.
  • C. Add a non-hash clustered index on POSTransactionId and CreateDate.
  • D. Add a hashed clustered index on POSTransactionId and CreateDate.


Answer : A

Explanation: The fraud detection service will need to meet the following requirement
(among others):
* Detect micropayments that are flagged with a StatusId value that is greater than 3 and that occurred within the last minute.
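For illustration only, a memory-optimized table with a hash nonclustered index declared on CreateDate, the structure option A describes. The table definition and bucket count are assumptions, not the case study schema.

CREATE TABLE dbo.MicroPaymentsExample (
    POSTransactionId INT       NOT NULL PRIMARY KEY NONCLUSTERED,
    StatusId         TINYINT   NOT NULL,
    CreateDate       DATETIME2 NOT NULL
        INDEX ix_CreateDate NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO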

You need to create the usp_AssignUser stored procedure.
Develop the solution by selecting and arranging the required code blocks in the correct order. You may not need all of the code blocks.




Answer :

Explanation: Box 1:


Box 2:

Box 3:

Box 4:

Box 5:

Box 6:

Box 7:

Note:
* From scenario: The mobile application will need to meet the following requirements:
/Communicate with web services that assign a new user to a micropayment by using a stored procedure named usp_AssignUser.
* Example:
CREATE PROCEDURE dbo.OrderInsert (@OrdNo integer, @CustCode nvarchar(5))
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'English')
    DECLARE @OrdDate datetime = GETDATE();
    INSERT INTO dbo.Ord (OrdNo, CustCode, OrdDate)
        VALUES (@OrdNo, @CustCode, @OrdDate);
END
GO
* Natively compiled stored procedures are Transact-SQL stored procedures compiled to native code that access memory-optimized tables. Natively compiled stored procedures allow for efficient execution of the queries and business logic in the stored procedure.
* READ COMMITTED versus REPEATABLE READ
Read committed is an isolation level that guarantees that any data read was committed at the moment it is read. It simply prevents the reader from seeing any intermediate, uncommitted ('dirty') data. It makes no promise that, if the transaction re-issues the read, it will find the same data; data is free to change after it was read.
Repeatable read is a higher isolation level that, in addition to the guarantees of read committed, also guarantees that any data read cannot change: if the transaction reads the same data again, it will find the previously read data in place, unchanged, and available to read.
* Both RAISERROR and THROW statements are used to raise an error in SQL Server.
RAISERROR has been available since SQL Server 7.0, whereas THROW was introduced in SQL Server 2012. Microsoft recommends using THROW instead of RAISERROR; THROW is simpler and easier to use.
* Explicit transactions. The user starts the transaction through an explicit BEGIN TRAN or
BEGIN ATOMIC. The transaction is completed by the corresponding COMMIT or ROLLBACK, or by END (in the case of an atomic block).
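A minimal sketch (not from the case study) of the THROW usage the note recommends, re-raising an error from a CATCH block:

BEGIN TRY
    -- hypothetical statement that may fail
    INSERT INTO dbo.Ord (OrdNo, CustCode, OrdDate)
        VALUES (1, N'ABC01', GETDATE());
END TRY
BEGIN CATCH
    -- THROW with no arguments re-raises the original error to the caller.
    THROW;
END CATCH;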

You need to design the UserActivity table.
Which three steps should you perform in sequence? To answer, move the appropriate three actions from the list of actions to the answer area and arrange them in the correct order.




Answer :

Explanation: Box 1:


Box 2:

Box 3:

Note:
* Creating a partitioned table or index typically happens in four parts:
-> Create a filegroup or filegroups and corresponding files that will hold the partitions specified by the partition scheme.
-> Create a partition function that maps the rows of a table or index into partitions based on the values of a specified column.
-> Create a partition scheme that maps the partitions of a partitioned table or index to the new filegroups.
-> Create or modify a table or index and specify the partition scheme as the storage location.
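A hedged sketch of those four steps; every object name, file path, and boundary value below is illustrative rather than taken from the case study (only the CoffeeAnalytics database name appears in the scenario).

-- 1. Filegroups and files to hold the partitions.
ALTER DATABASE CoffeeAnalytics ADD FILEGROUP FG_Activity_2014Q1;
ALTER DATABASE CoffeeAnalytics
    ADD FILE (NAME = N'Activity_2014Q1', FILENAME = N'C:\Data\Activity_2014Q1.ndf')
    TO FILEGROUP FG_Activity_2014Q1;
ALTER DATABASE CoffeeAnalytics ADD FILEGROUP FG_Activity_2014Q2;
ALTER DATABASE CoffeeAnalytics
    ADD FILE (NAME = N'Activity_2014Q2', FILENAME = N'C:\Data\Activity_2014Q2.ndf')
    TO FILEGROUP FG_Activity_2014Q2;
GO
-- 2. Partition function: maps rows to partitions by a date column.
CREATE PARTITION FUNCTION pf_ActivityDate (datetime2)
    AS RANGE RIGHT FOR VALUES ('2014-04-01');
GO
-- 3. Partition scheme: maps each partition to a filegroup.
CREATE PARTITION SCHEME ps_ActivityDate
    AS PARTITION pf_ActivityDate TO (FG_Activity_2014Q1, FG_Activity_2014Q2);
GO
-- 4. Create the table on the partition scheme.
CREATE TABLE dbo.UserActivityExample (
    ActivityId   BIGINT IDENTITY(1,1) NOT NULL,
    UserId       INT NOT NULL,
    ActivityDate DATETIME2 NOT NULL
) ON ps_ActivityDate (ActivityDate);
GO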
* Reorganizing an index uses minimal system resources.
* From scenario:
/ The index maintenance strategy for the UserActivity table must provide the optimal structure for both maintainability and query performance.
/ The CoffeeAnalytics database will combine imports of the POSTransaction and
MobileLocation tables to create a UserActivity table for reports on the trends in activity.
Queries against the UserActivity table will include aggregated calculations on all columns that are not used in filters or groupings.
/ When the daily maintenance finishes, micropayments that are one week old must be available for queries in the UserActivity table. Micropayments will be queried most frequently within their first week and will require support for in-memory queries for data within the first week.
The maintenance of the UserActivity table must allow frequent maintenance on the day's most recent activities with minimal impact on the use of disk space and the resources available to queries. The processes that add data to the UserActivity table must be able to update data from any time period, even while maintenance is running.
* Columnstore indexes work well for mostly read-only queries that perform analysis on large data sets. Often, these are queries for data warehousing workloads. Columnstore indexes give high performance gains for queries that use full table scans, and are not well-suited for queries that seek into the data, searching for a particular value.
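For illustration, a clustered columnstore index of the kind described above, created on the hypothetical partitioned table from the previous sketch:

-- Converts the table to columnstore storage; suited to the scan-heavy,
-- aggregate-all-columns reporting described for UserActivity.
CREATE CLUSTERED COLUMNSTORE INDEX cci_UserActivityExample
    ON dbo.UserActivityExample;
GO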

You need to monitor the health of your tables and indexes in order to implement the required index maintenance strategy.
What should you do?

  • A. Query system DMVs to monitor avg_chain_length and max_chain_length. Create alerts to notify you when these values converge.
  • B. Create a SQL Agent alert when the File Table: Avg time per file I/O request value is increasing.
  • C. Query system DMVs to monitor total_bucket_count. Create alerts to notify you when this value increases.
  • D. Query system DMVs to monitor total_bucket_count. Create alerts to notify you when this value decreases.


Answer : A

Explanation: From scenario:
* You need to anticipate when POSTransaction table will need index maintenance.
* The index maintenance strategy for the UserActivity table must provide the optimal structure for both maintainability and query performance.
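A hedged sketch of the monitoring query behind option A: sys.dm_db_xtp_hash_index_stats exposes avg_chain_length and max_chain_length for hash indexes on memory-optimized tables. The interpretation in the comment is illustrative, not a fixed threshold.

SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.total_bucket_count,
       s.empty_bucket_count,
       s.avg_chain_length,
       s.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS s;
-- When avg_chain_length and max_chain_length converge (and chains grow long),
-- the bucket count is likely too low and the hash index needs maintenance.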

You need to modify the stored procedure usp_LookupConcurrentUsers.
What should you do?

  • A. Add a clustered index to the summary table.
  • B. Add a nonclustered index to the summary table.
  • C. Add a clustered columnstore index to the summary table.
  • D. Use a table variable instead of the summary table.


Answer : A

Explanation: Scenario: Query the current open micropayments for users who own multiple micropayments by using a stored procedure named usp_LookupConcurrentUsers.
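A minimal sketch of option A, assuming the summary table is an ordinary disk-based table; the table and column names are placeholders.

-- A clustered index gives the summary table an ordered storage structure,
-- avoiding repeated heap scans by usp_LookupConcurrentUsers.
CREATE CLUSTERED INDEX cix_MicroPaymentSummary
    ON dbo.MicroPaymentSummary (UserId, MicroPaymentId);
GO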

You need to implement a new version of usp_AddMobileLocation. Develop the solution by selecting and arranging the required code blocks in the correct order. You may not need all of the code blocks.




Answer :

Explanation: Box 1:


Box 2:

Box 3:

Box 4:

Box 5:

Box 6:

Note:
* From scenario:
The mobile application will need to meet the following requirements:
Update the location of the user by using a stored procedure named usp_AddMobileLocation.
* DELAYED_DURABILITY
SQL Server transaction commits can be either fully durable, the SQL Server default, or delayed durable (also known as lazy commit).
Fully durable transaction commits are synchronous and report a commit as successful and return control to the client only after the log records for the transaction are written to disk.
Delayed durable transaction commits are asynchronous and report a commit as successful before the log records for the transaction are written to disk. Writing the transaction log entries to disk is required for a transaction to be durable. Delayed durable transactions become durable when the transaction log entries are flushed to disk.
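A hedged sketch of how delayed durability is enabled and used. The database name comes from the case study; the table columns and values are assumed for illustration.

-- Allow delayed durable commits at the database level.
ALTER DATABASE CoffeeTransactions SET DELAYED_DURABILITY = ALLOWED;
GO
-- Opt in per transaction: the commit returns before the log records are flushed to disk.
BEGIN TRANSACTION;
    UPDATE dbo.MobileLocation            -- columns assumed for illustration
    SET Latitude = 47.6, Longitude = -122.3
    WHERE UserId = 42;
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);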

You need to optimize the index and table structures for POSTransaction.
Which task should you use with each maintenance step? To answer, drag the appropriate tasks to the correct maintenance steps. Each task may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.




Answer :

Explanation:



You need to implement security for the restore and audit process. What should you do?

  • A. Grant the COFFECORP\Auditors group ALTER ANY CONNECTION and SELECT ALL USER SECURABLES permissions. Grant the COFFECORP\StoreAgent group ALTER ANY CONNECTION and IMPERSONATE ANY LOGIN permissions.
  • B. Grant the COFFECORP\Auditors group CONNECT ANY DATABASE and IMPERSONATE ANY LOGIN permissions. Grant the COFFECORP\StoreAgent group CONNECT ANY DATABASE and SELECT ALL USER SECURABLES permissions.
  • C. Grant the COFFECORP\Auditors group ALTER ANY CONNECTION and IMPERSONATE ANY LOGIN permissions. Grant the COFFECORP\StoreAgent group ALTER ANY CONNECTION and SELECT ALL USER SECURABLES permissions.
  • D. Grant the COFFECORP\Auditors group CONNECT ANY DATABASE and SELECT ALL USER SECURABLES permissions. Grant the COFFECORP\StoreAgent group CONNECT ANY DATABASE and IMPERSONATE ANY LOGIN permissions.


Answer : A

You need to modify the usp_DetectSuspiciousActivity stored procedure.
Which two actions should you perform? Each correct answer presents part of the solution.
Choose two.


  • A. Option A
  • B. Option B
  • C. Option C
  • D. Option D
  • E. Option E
  • F. Option F


Answer : D,E

Explanation:
Note:
* Move micropayments to dbo.POSException table by using a stored procedure named usp_DetectSuspiciousActivity.
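A hedged sketch of the move described above, using DELETE ... OUTPUT to transfer flagged micropayments into dbo.POSException in one statement. The source table, columns, and filter are assumptions (ordinary disk-based tables are assumed).

CREATE PROCEDURE dbo.usp_DetectSuspiciousActivity_Example
AS
BEGIN
    -- Deletes suspicious rows and, in the same statement, writes the deleted
    -- rows into the exception table (the column lists must match).
    DELETE FROM dbo.MicroPayments
    OUTPUT DELETED.POSTransactionId, DELETED.StatusId, DELETED.CreateDate
        INTO dbo.POSException (POSTransactionId, StatusId, CreateDate)
    WHERE StatusId > 3
      AND CreateDate >= DATEADD(MINUTE, -1, SYSDATETIME());
END
GO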
