Designing Database Solutions for Microsoft SQL Server v12.0 (70-465)


You have a server named Server1 that has 16 processors.
You plan to deploy multiple instances of SQL Server 2014 to Server1.
You need to recommend a method to allocate processors to each instance.
What should you include in the recommendation?
More than one answer choice may achieve the goal. Select the BEST answer.

  • A. Processor affinity
  • B. Windows System Resource Manager (WSRM)
  • C. Max Degree of Parallelism
  • D. Resource Governor


Answer : A

Explanation:
CPU affinity management through Windows System Resource Manager is not recommended for SQL Server multi-instance management. Instead, use the processor affinity settings in SQL Server.
Reference: Server Properties (Processors Page)
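For reference, processor affinity is configured per instance with ALTER SERVER CONFIGURATION; the CPU ranges below are only illustrative for a 16-processor server running two instances.

-- Run in instance 1: bind it to CPUs 0 through 7.
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 7;

-- Run in instance 2: bind it to CPUs 8 through 15.
-- ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 8 TO 15;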

You are the senior database administrator at Contoso, Ltd. You manage a SQL Server 2014 instance with multiple databases used for reporting.
You have recently hired a junior database administrator. You want this person to be able to view the database structures on the server, but you do not want him or her to be able to make changes or see the data in the tables.
The new hire's login credentials are as follows:

Login name: JFree
Password: Jx672$qse
You want the new hire to be required to change his password on his next login.
The code you produce must execute regardless of the initial database context in which it is started.
You need to write the code required to give the new hire only the desired access, using the smallest number of steps. Develop the solution by selecting and arranging the required code blocks in the correct order. You may not need all of the code blocks.




Answer :

Explanation:
Box 1:


Box 2:

Note:
* MUST_CHANGE
Applies to: SQL Server 2008 through SQL Server 2014.
Applies to SQL Server logins only. If this option is included, SQL Server prompts the user for a new password the first time the new login is used.
* The VIEW DEFINITION permission lets a user see the metadata of the securable on which the permission is granted. However, VIEW DEFINITION permission does not confer access to the securable itself. For example, a user that is granted only VIEW DEFINITION permission on a table can see metadata related to the table in the sys.objects catalog view.
However, without additional permissions such as SELECT or CONTROL, the user cannot read data from the table.
References:
CREATE LOGIN (Transact-SQL)
ALTER SERVER ROLE (Transact-SQL)
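A minimal sketch consistent with the notes above (the exact answer-area code blocks are not reproduced here); it assumes the server-level VIEW ANY DEFINITION permission is granted so that the code works regardless of the starting database context:

USE master;
GO
-- MUST_CHANGE requires CHECK_EXPIRATION = ON.
CREATE LOGIN JFree WITH PASSWORD = N'Jx672$qse' MUST_CHANGE,
    CHECK_EXPIRATION = ON;
GO
-- Lets JFree see object metadata on the server without reading table data.
GRANT VIEW ANY DEFINITION TO JFree;
GO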

You use SQL Server 2014. You create a table within a database by using the following
DDL:


The following table illustrates a representative sample of data:

The system is expected to handle 50 million orders a month over the next five years.
You have been instructed by your Team Lead to follow best practices for storage and performance in the utilization of SPARSE columns.
Which columns should you designate as SPARSE? To answer, mark each column as
SPARSE or NOT SPARSE in the answer area.



Answer :

Explanation:


Note:
Sparse columns are ordinary columns that have an optimized storage for null values.
Sparse columns reduce the space requirements for null values at the cost of more overhead to retrieve nonnull values. Consider using sparse columns when the space saved is at least 20 percent to 40 percent.
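The original DDL image is not reproduced here, so the sketch below uses hypothetical column names; the point is that columns which are NULL for the large majority of the 50 million monthly rows are SPARSE candidates, while columns that are always populated are not:

CREATE TABLE dbo.OrdersExample
(
    OrderID       INT      NOT NULL,    -- always populated: NOT SPARSE
    OrderDate     DATETIME NOT NULL,    -- always populated: NOT SPARSE
    CancelledDate DATETIME SPARSE NULL, -- NULL for most rows: SPARSE
    PromotionCode INT      SPARSE NULL  -- NULL for most rows: SPARSE
);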

You have an SQL Server 2014 server.
You plan to create four stored procedures that will use transactions. The stored procedures will be configured as shown in the following table.


You need to recommend an isolation level for each stored procedure. The solution must support the concurrency strategy of each stored procedure and must minimize locks.
What should you recommend? To answer, drag the appropriate isolation levels to the correct stored procedures. Each isolation level may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.



Answer :

Explanation:
* SP1: SNAPSHOT
The transaction can only recognize data modifications that were committed before the start of the transaction. Data modifications made by other transactions after the start of the current transaction are not visible to statements executing in the current transaction. The effect is as if the statements in a transaction get a snapshot of the committed data as it existed at the start of the transaction.
* SP2: REPEATABLE READ
Specifies that statements cannot read data that has been modified but not yet committed by other transactions and that no other transactions can modify data that has been read by the current transaction until the current transaction completes.
* SP3: READ COMMITTED
Specifies that statements cannot read data that has been modified but not committed by other transactions. This prevents dirty reads.
* SP4: SERIALIZABLE
The SERIALIZABLE guarantees include:
/ Statements cannot read data that has been modified but not yet committed by other transactions.
/ No other transactions can modify data that has been read by the current transaction until the current transaction completes.
/ Other transactions cannot insert new rows with key values that would fall in the range of keys read by any statements in the current transaction until the current transaction completes.
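As a sketch, the isolation level is set as the first statement inside each stored procedure (SP1 shown below with a placeholder body; SNAPSHOT additionally requires ALLOW_SNAPSHOT_ISOLATION to be enabled on the database):

ALTER DATABASE CURRENT SET ALLOW_SNAPSHOT_ISOLATION ON;
GO
CREATE PROCEDURE dbo.SP1
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;  -- SP2: REPEATABLE READ,
                                               -- SP3: READ COMMITTED,
                                               -- SP4: SERIALIZABLE
    BEGIN TRANSACTION;
        SELECT COUNT(*) FROM sys.objects;      -- placeholder for the real workload
    COMMIT TRANSACTION;
END;
GO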

You deploy a database by using SQL Server 2014. The database contains a table named
Table1.
You need to recommend a solution to track all of the deletions executed on Table1. The solution must minimize the amount of custom code required.
What should you recommend?

  • A. Change data capture
  • B. Statistics
  • C. A trigger
  • D. Master Data Services


Answer : A

Explanation:
Change data capture is designed to capture insert, update, and delete activity applied to
SQL Server tables, and to make the details of the changes available in an easily consumed relational format. The change tables used by change data capture contain columns that mirror the column structure of a tracked source table, along with the metadata needed to understand the changes that have occurred.
Reference: About Change Data Capture (SQL Server)
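A minimal sketch of enabling change data capture for Table1 (assuming the dbo schema); deleted rows then appear in the generated change table with __$operation = 1:

USE Database1;
GO
EXEC sys.sp_cdc_enable_db;        -- enable CDC for the database (sysadmin required)
GO
EXEC sys.sp_cdc_enable_table      -- track changes, including deletes, on dbo.Table1
    @source_schema = N'dbo',
    @source_name   = N'Table1',
    @role_name     = NULL;        -- no gating role for change-table access
GO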

You are planning to deploy a database to Windows Azure SQL Database.
You need to design a stored procedure to update rows. The stored procedure must meet the following requirements:
-> If the update fails, an error must be raised to the application and the update must be discarded.
-> The stored procedure must be designed to maximize concurrency.
What should you include in your design?
To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.




Answer :

Explanation:
Box 1: Begin an explicit transaction.
Box 2: Perform an update in a try block.
Box 3: Read the @@ROWCOUNT system variable.
Box 4: Raise an error and roll back the transaction if the row count is less than 1.
Box 5: Commit the transaction in a finally block.
Note:
* Read Committed is SQL Server's default isolation level.
* @@ROWCOUNT returns the number of rows affected by the last statement.
* Using TRY…CATCH in a transaction
The following example shows how a TRY…CATCH block works inside a transaction. The statement inside the TRY block generates a constraint violation error.
BEGIN TRANSACTION;

BEGIN TRY
    -- Generate a constraint violation error.
    DELETE FROM Production.Product
    WHERE ProductID = 980;
END TRY
BEGIN CATCH
    SELECT
        ERROR_NUMBER()    AS ErrorNumber
       ,ERROR_SEVERITY()  AS ErrorSeverity
       ,ERROR_STATE()     AS ErrorState
       ,ERROR_PROCEDURE() AS ErrorProcedure
       ,ERROR_LINE()      AS ErrorLine
       ,ERROR_MESSAGE()   AS ErrorMessage;

    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
END CATCH;

IF @@TRANCOUNT > 0
    COMMIT TRANSACTION;
GO
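A sketch of the boxed pattern above, assuming a hypothetical dbo.Orders table and parameters; T-SQL has no FINALLY construct, so the commit is simply the last step of the successful TRY path.

CREATE PROCEDURE dbo.usp_UpdateOrderStatus
    @OrderID INT,
    @Status  NVARCHAR(20)
AS
BEGIN
    BEGIN TRANSACTION;                       -- Box 1: explicit transaction
    BEGIN TRY
        UPDATE dbo.Orders                    -- Box 2: update in a TRY block
        SET    Status  = @Status
        WHERE  OrderID = @OrderID;

        IF @@ROWCOUNT < 1                    -- Boxes 3-4: check the row count
            RAISERROR (N'No rows were updated.', 16, 1);

        COMMIT TRANSACTION;                  -- Box 5: commit on success
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;            -- discard the update
        THROW;                               -- re-raise the error to the application
    END CATCH;
END;
GO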

You plan to deploy SQL Server 2012.
You are designing two stored procedures named USP_1 and USP_2 that have the following requirements:
-> Prevent data read by USP_1 from being modified by other active processes.
-> Prevent USP_2 from performing dirty reads.
You need to recommend the isolation level for each stored procedure. The solution must maximize concurrency.
Which isolation levels should you recommend?
To answer, drag the appropriate isolation level to the correct stored procedure in the answer area.




Answer :

Explanation:


Note:
* REPEATABLE READ
This isolation level includes the guarantees given by SNAPSHOT isolation level. In addition, REPEATABLE READ guarantees that for any row that is read by the transaction, at the time the transaction commits the row has not been changed by any other transaction. Every read operation in the transaction is repeatable up to the end of the transaction.
* Read Committed is SQL Server's default isolation level. It ensures that an operation never reads data that another application has changed but not yet committed.
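A sketch with placeholder bodies, mapping the requirements to the recommended levels (REPEATABLE READ for USP_1, the default READ COMMITTED for USP_2):

CREATE PROCEDURE dbo.USP_1
AS
BEGIN
    -- Rows read here cannot be modified by other transactions until this
    -- transaction completes.
    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    BEGIN TRANSACTION;
        SELECT COUNT(*) FROM sys.objects;   -- placeholder for the real query
    COMMIT TRANSACTION;
END;
GO
CREATE PROCEDURE dbo.USP_2
AS
BEGIN
    -- READ COMMITTED (the default) already prevents dirty reads.
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
    SELECT COUNT(*) FROM sys.objects;       -- placeholder for the real query
END;
GO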

You are the new database administrator for a SQL Server 2014 instance.
You conduct an assessment on the instance and determine that the auto create statistics setting on the database named DB1 has been turned off. You see no evidence that any maintenance has been occurring.
You want to set up monitoring to see if query performance is being affected.
You need to set up a monitoring process that will capture any cases where statistics could have been useful if they existed.
What should you do?

  • A. Create a SQL Server Agent job to execute DBCC SHOW_STATISTICS on each of the primary key columns in the database.
  • B. Use the missing_column_statistics extended event.
  • C. Query the sys.statistics system view to see all cases where the statistics were last needed.
  • D. Write a query using the sys.dm_db_missing_index_group_stats DMV, joining to sys.indexes and filtering on is_hypothetical.


Answer : B

Explanation:
The Missing Column Statistics event class indicates that column statistics that could have been useful for the optimizer are not available.
By monitoring the Missing Column Statistics event class, you can determine if there are statistics missing for a column used by a query. This can cause the optimizer to choose a less efficient query plan than expected.
Reference: Missing Column Statistics Event Class
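A sketch of an Extended Events session that captures this event (the session and target file names are illustrative):

CREATE EVENT SESSION MissingColumnStats ON SERVER
ADD EVENT sqlserver.missing_column_statistics
    (ACTION (sqlserver.sql_text, sqlserver.database_name))
ADD TARGET package0.event_file (SET filename = N'MissingColumnStats.xel');
GO
ALTER EVENT SESSION MissingColumnStats ON SERVER STATE = START;
GO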

You are creating a database that will store usernames and credit card numbers for an application.
You need to recommend a solution to store and reuse the credit card numbers in the database.
What should you recommend? More than one answer choice may achieve the goal. Select the BEST answer.

  • A. Data encryption
  • B. Transparent Data Encryption (TDE)
  • C. Encrypting File System (EFS)
  • D. Data hashing


Answer : B

Explanation:
If the credit card numbers are to be encrypted for storage, the database needs a database encryption key (DEK); Transparent Data Encryption uses the DEK to encrypt the stored data.
Reference: http://msdn.microsoft.com/en-us/library/bb934049.aspx
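A sketch of the standard TDE setup from the referenced article; the certificate name, database name, and password are placeholders.

USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = N'Use-A-Str0ng-P@ssword-Here!';
GO
CREATE CERTIFICATE TDECert WITH SUBJECT = N'TDE certificate';
GO
USE SalesDB;   -- the database holding the credit card data
GO
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
GO
ALTER DATABASE SalesDB SET ENCRYPTION ON;
GO
-- Back up TDECert and its private key; without them the encrypted
-- database backups cannot be restored elsewhere.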

You are the administrator for a SQL Server 2014 instance that stores the data for an online transaction processing sales system.
The company takes full backups every week, differential backups on the days with no full backups, and hourly transaction log backups. These backups are stored on a backup server in the company's data center.
Every week, the company places the full backup on a tape and sends it to a third-party backup storage system.
The company is worried that a disaster might occur that could destroy their computer center and cause them to lose orders.
You need to determine the best method for providing the smallest amount of data loss and downtime without leasing or purchasing additional physical locations.
What should you do? More than one answer choice may achieve the goal. Select the BEST answer.

  • A. Set up SQL Server Always On with a SQL Azure database as a replica.
  • B. Set up SQL Server Always On by using a SQL Server on a Windows Azure Virtual Machine.
  • C. Put the differential backup on tape and send it to the third-party backup storage system.
  • D. Use the Microsoft SQL Server Backup to Microsoft Windows Azure Tool to direct all backups to a different geographical location.


Answer : D

Explanation:
SQL Server 2012 was the first version to provide the ability to back up databases to the
Cloud, and SQL Server 2014 improves on the process.
Microsoft SQL Server Backup to Windows Azure Tool enables backup to Windows Azure
Blob Storage and encrypts and compresses SQL Server backups stored locally or in the cloud.
Reference:
Smart, Secure, Cost-Effective: SQL Server Back Up to Windows Azure - SQL Server Team
Blog - Site Home - TechNet Blogs
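Related to the answer, SQL Server 2014 can also write backups directly to Windows Azure Blob storage with BACKUP TO URL; a sketch assuming a hypothetical storage account and credential:

CREATE CREDENTIAL AzureBackupCred
    WITH IDENTITY = 'mystorageaccount',              -- storage account name
         SECRET   = '<storage account access key>';  -- access key (placeholder)
GO
BACKUP DATABASE Sales
    TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/Sales_full.bak'
    WITH CREDENTIAL = 'AzureBackupCred', COMPRESSION;
GO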

You create a stored procedure that retrieves all of the rows from a table named Table1.
You need to recommend a solution to ensure that all of the statements in the stored procedure can be executed if another transaction is modifying rows in Table1 simultaneously.
What should you recommend?

  • A. Snapshot isolation
  • B. A database snapshot
  • C. Filegroups
  • D. Indexes


Answer : A

Explanation:
Once snapshot isolation is enabled, updated row versions for each transaction are maintained in tempdb. A unique transaction sequence number identifies each transaction, and these unique numbers are recorded for each row version. The transaction works with the most recent row versions having a sequence number before the sequence number of the transaction. Newer row versions created after the transaction has begun are ignored by the transaction.
Reference: Transaction Isolation Levels
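A sketch of enabling snapshot isolation for the database that holds Table1 and using it in the stored procedure (the procedure name is illustrative):

ALTER DATABASE CURRENT SET ALLOW_SNAPSHOT_ISOLATION ON;
GO
CREATE PROCEDURE dbo.usp_GetAllTable1Rows
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    -- Readers see the committed version of each row as of the start of the
    -- transaction and are not blocked by concurrent modifications.
    SELECT * FROM dbo.Table1;
END;
GO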

You plan to deploy a database to SQL Azure.
You must create two tables named Table1 and Table2 that will have the following specifications:
-> Table1 will contain a date column named Column1 that will contain a null value approximately 80 percent of the time.
-> Table2 will contain a column named Column2 which is the product of two other columns in Table2.
Both Table1 and Table2 will contain over one million rows.
You need to recommend which options must be defined for the columns. The solution must minimize the time it takes to retrieve data from the tables.
Which options should you recommend?
To answer, drag the appropriate options to the correct column in the answer area.




Answer :

Explanation:


* Sparse columns are ordinary columns that have an optimized storage for null values.
Sparse columns reduce the space requirements for null values at the cost of more overhead to retrieve nonnull values. Consider using sparse columns when the space saved is at least 20 percent to 40 percent.
* A Persisted column would be faster to retrieve.
* A computed column is computed from an expression that can use other columns in the same table. The expression can be a noncomputed column name, constant, function, and any combination of these connected by one or more operators.
Unless otherwise specified, computed columns are virtual columns that are not physically stored in the table. Their values are recalculated every time they are referenced in a query.
The Database Engine uses the PERSISTED keyword in the CREATE TABLE and ALTER
TABLE statements to physically store computed columns in the table. Their values are updated when any columns that are part of their calculation change.

References:
Use Sparse Columns
Specify Computed Columns in a Table
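A sketch combining both options in one hypothetical table for brevity (the real design uses two tables); Column1 is SPARSE because it is NULL about 80 percent of the time, and Column2 is a PERSISTED computed column so its value is stored and does not have to be recalculated at query time.

CREATE TABLE dbo.TableExample
(
    ID       INT   NOT NULL PRIMARY KEY,
    Price    MONEY NOT NULL,
    Quantity INT   NOT NULL,
    Column1  DATE  SPARSE NULL,                -- mostly NULL: SPARSE
    Column2  AS (Price * Quantity) PERSISTED   -- stored product of two columns
);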

You need to recommend the actions that are required to partition a table.
In which order should the four actions be performed?
To answer, move the actions from the list of actions to the answer area and arrange them in the correct order.




Answer :

Explanation:


Create a partitioned table using the AdventureWorks2012 database:
CREATE PARTITION FUNCTION TransactionRangePF1 (DATETIME)
AS RANGE RIGHT FOR VALUES
(
    '20071001', '20071101', '20071201', '20080101',
    '20080201', '20080301', '20080401', '20080501',
    '20080601', '20080701', '20080801'
);
GO

CREATE PARTITION SCHEME TransactionsPS1 AS PARTITION TransactionRangePF1
TO
(
    [PRIMARY], [PRIMARY], [PRIMARY], [PRIMARY], [PRIMARY],
    [PRIMARY], [PRIMARY], [PRIMARY], [PRIMARY], [PRIMARY],
    [PRIMARY], [PRIMARY], [PRIMARY]
);
GO

CREATE TABLE dbo.TransactionHistory
(
    TransactionID INT NOT NULL, -- not bothering with IDENTITY here
    ProductID INT NOT NULL,
    ReferenceOrderID INT NOT NULL,
    ReferenceOrderLineID INT NOT NULL DEFAULT (0),
    TransactionDate DATETIME NOT NULL DEFAULT (GETDATE()),
    TransactionType NCHAR(1) NOT NULL,
    Quantity INT NOT NULL,
    ActualCost MONEY NOT NULL,
    ModifiedDate DATETIME NOT NULL DEFAULT (GETDATE()),
    CONSTRAINT CK_TransactionType
        CHECK (UPPER(TransactionType) IN (N'W', N'S', N'P'))
)
ON TransactionsPS1 (TransactionDate);
GO

You have a database hosted on SQL Server 2012 R2. The database contains 5 million rows.
You need to recommend a repeatable method to migrate the database to SQL Azure.
Which method should you recommend? More than one answer choice may achieve the goal. Select the BEST answer.

  • A. Create a SQL Server Integration Services (SSIS) package, and then run the package.
  • B. Back up the database, and then restore the database.
  • C. Extract a data-tier application, and then import the application.
  • D. Generate scripts to create all of the database objects and all of the data, and then execute the scripts by using SQL Azure.


Answer : A

Explanation:

SQL Server Integration Services
Most flexibility
Data Transfer Efficiency: Good
/ SSIS can be used to perform a broad range of data migration tasks. SSIS provides support for complex workflow and data transformation between the source and destination. It is a good choice for transferring data for databases that require many changes to work on Microsoft Azure SQL Database. You can use SSIS data transfer packages with another mechanism for transferring the database schema, such as a Data-tier Application package.
Reference: SSIS for Azure and Hybrid Data Movement

Incorrect:
Not D: Generate Scripts Wizard
Has an explicit option for generating Azure SQL Database scripts
Data Transfer Efficiency: Poor
Good for smaller databases
/ Using the Generate Scripts wizard to migrate a SQL Server database to Azure SQL Database should be limited to:
Teams who have experience with the wizard.
Migrating simple databases that need few schema changes to run on Azure SQL Database. The scripts generated from the source database can be modified before being used to create the new version of the database on Azure SQL Database, but using a database project in SQL Server Data Tools has richer support for making schema changes.
Migrating small databases that do not have much data. The wizard generates scripts that use insert statements instead of bulk copies to transfer the data. The insert statements can be throttled when the tables contain too much data, and are not as fast as bulk copies.

You have a server named SQL1 that has SQL Server 2014 installed. SQL1 hosts a database named Database1. Database1 contains a table named Table1. Table1 is partitioned across five filegroups based on the TransactionType field.
The schema of Table1 is configured as shown in the following table.


Table1 contains the indexes shown in the following table.

You need to recommend an index strategy to maximize performance for the queries that consume the indexes available to Table1.
Which type of index storage should you recommend?
To answer, drag the appropriate index storage type to the correct index in the answer area.



Answer :

Explanation:


Note:

Index Storage Type
* Designing a partitioned index independently (unaligned) of the base table can be useful in the following cases:
/The base table has not been partitioned.
/The index key is unique and it does not contain the partitioning column of the table.
/ You want the base table to participate in collocated joins with more tables using different join columns.

Reference: Partitioned Tables and Indexes
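Reusing the TransactionHistory partitioning example from the earlier question for concreteness, an aligned index is created on the table's partition scheme while an unaligned index is created on a single filegroup:

-- Aligned: partitioned on the same scheme and column as the base table.
CREATE NONCLUSTERED INDEX IX_TransactionHistory_ProductID
    ON dbo.TransactionHistory (ProductID)
    ON TransactionsPS1 (TransactionDate);
GO
-- Unaligned: stored on one filegroup, independent of the table's partitions.
CREATE NONCLUSTERED INDEX IX_TransactionHistory_ReferenceOrderID
    ON dbo.TransactionHistory (ReferenceOrderID)
    ON [PRIMARY];
GO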
