What is Azure Cosmos DB?

 

What is Azure Cosmos DB and why should you get excited about it?

Cosmos DB started as an internal Microsoft project – “Project Florence” – in 2010. It is a globally distributed, highly scalable and performant multi-model database service built on Azure. It was built from the ground up with horizontal scalability and global distribution in mind, so you can expect seamless geo-replication to Azure regions by clicking on a map, and latency in the order of milliseconds.


Why should you use Cosmos DB?

Cosmos DB is the future of storage. Below are some reasons you may want to use it.

  • You use Microsoft stack already and find your favourite RDBMS – SQL Server – too slow
  • Your applications need to process large amounts of data at blazing speeds
  • Your data is not relational anymore; you want a JSON document store, a graph database, or one of the other popular APIs Cosmos DB supports
  • You are having to “over-provision” specs to cover peak loads on the database server
  • Your business spans multiple continents and you want easy and reliable replication
  • You trust Microsoft to do a better job with availability using Azure

[Image: Cosmos DB geo-replication map]

Hello Collections!

Collections are to Cosmos DB what tables are to an RDBMS. This analogy is for easy understanding; don’t get hung up on details. There are many differences between collections and tables.

After you create a Database in your Cosmos DB account, you can create your first collection. Below are a few aspects of a collection you need to be familiar with.

1. Request Units / Throughput: Request Unit (RU) is the currency of Cosmos DB.

1 RU = capacity to read* a single 1 KB item consisting of 10 unique property values
*by self-link or id

A collection need not be a set of documents with the same schema. Because Cosmos DB is schema-agnostic, you can store different document types in the same collection; that said, you may want to group together document types that need the same amount of scalability and storage. Cosmos DB is cost effective for high-volume transaction systems because the RU/s for a collection can be ramped up for peak hours and reduced to a minimum during off-peak hours (by the definition above, 1,000 RU/s provisioned supports roughly 1,000 such 1 KB point reads per second).

When the provisioned number of RU/s is used up, client requests may experience throttling. Prolonged throttling indicates an insufficient throughput setting.

[Image: Provisioning throughput (RU/s) for a collection]

2. Partition: Each document in a collection has a partition key and an id

The partition key identifies the logical partition a document belongs to, and the id field identifies a specific document within it.

When you create a collection with an initial throughput of 1,000 RU/s or more, the resulting collection has unlimited storage scalability. Based on the initial throughput assigned, Cosmos DB computes the number of physical partitions required to support it. Each physical partition has a storage limit of 10 GB, so a single logical partition (all documents sharing one partition key value, such as Product Type = ‘Motorbike helmet’) should not grow beyond 10 GB. The number of partitions in a collection is theoretically unbounded, so a good partition key is a field with suitably high cardinality. With a bad partition key, you are bound to get uneven distribution of data and are likely to hit the 10 GB storage limit for a partition.
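To make this concrete, here is a rough sketch of a Cosmos DB SQL API query against a collection partitioned on a hypothetical /productType field (name and price are assumed fields too). Filtering on the partition key lets Cosmos DB route the request to a single logical partition; leaving it out forces a cross-partition fan-out that consumes more RUs.

--Sketch only: productType is an assumed partition key; name and price are assumed fields
SELECT c.id, c.name, c.price
FROM c
WHERE c.productType = 'Motorbike helmet'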

[Image: Physical and logical partitions in a Cosmos DB collection]

3. Time to Live: TTL can be set at the collection or document level. Document-level TTL takes precedence, obviously.

Simply put, if the TTL for a document is set to 3600 seconds, the document expires 3600 seconds after it was last modified. The system field _ts (set to Epoch time) records the last modified time of a document. Be careful when you set TTL at the collection level, as it applies to every document in the collection; used deliberately, this feature helps implement a sound purge policy.
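If you want to eyeball which documents are past a 3600-second TTL, a rough sketch using the SQL API could look like the following. It assumes GetCurrentTimestamp(), which returns milliseconds since Epoch, hence the division by 1000 to compare against _ts (seconds since Epoch).

--Sketch only: list documents whose last modification is more than an hour old
SELECT c.id, c._ts
FROM c
WHERE (GetCurrentTimestamp() / 1000) - c._ts > 3600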

Global Distribution:

Geo-replication becomes as simple as clicking on a map with Cosmos DB’s global distribution feature. With more than 50 Azure regions available, you can not only set multiple read regions for global customers, but also specify failover regions to help with disaster recovery.

[Image: Configuring automatic failover across regions]

There is a lot more to Cosmos DB than can be covered in a single post. With a whole heap of Azure services it integrates with easily, such as SQL Managed Instance, SQL Data Warehouse, HDInsight, Databricks and Data Lake, Cosmos DB is here to stay.

Check out my post about Cosmos DB migration tool if you are migrating data to or from Cosmos DB.


Export data from Cosmos DB to SQL Server

Cosmos DB is Microsoft’s latest NoSQL database, offering low latency, high scalability and geo-distribution to any Azure region. Read more about Cosmos DB here. In its infancy, Cosmos DB was known as DocumentDB; renaming was inevitable as DocumentDB evolved beyond just a document store.

Following is a simple note on how to export data from Cosmos DB to SQL Server. At times, you may want a dump of data from a Cosmos DB collection in SQL Server for analysis. This comes in handy for data integrity checks, troubleshooting a production issue, or deriving insights from the data.

There are a few ways to export data from Cosmos DB. The quickest option is to use the DocumentDB / Cosmos DB Migration Tool, provided by Microsoft to migrate data from various sources such as MongoDB, JSON, CSV and SQL Server into Cosmos DB. The tool can also export data out of Cosmos DB, which is what we will be doing in this case.

1. Use Azure Cosmos DB Migration tool to export data to json files:

  • Install the tool and launch it as Administrator (use an Azure VM for best performance). Be mindful of the spike in RU consumption when exporting data from your collection. To avoid throttling, scale your collection up just before the export, or export during off-peak hours, if any. Leave a comment if you need any pointers on this.

[Image: Launching the Cosmos DB Migration Tool]
  • Choose the source – DocumentDB (the tool still refers to Cosmos DB by its old name, DocumentDB)

[Image: Migration tool – source selection]

  • Specify the connection string. You can find the endpoint and key in the Keys section of your Cosmos DB account in the Azure portal. Please note that Cosmos DB is case-sensitive.

[Image: Migration tool – connection string]
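For reference, the connection string the tool expects generally follows the shape below; the account name, key and database name are placeholders, not real values.

AccountEndpoint=https://<your-account>.documents.azure.com:443/;AccountKey=<your-key>;Database=<your-database>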

  • A peek into Advanced options
    • Include internal fields – for each document, Cosmos DB maintains a set of auto-generated internal/system fields such as _ts and _self. Their names start with an underscore, making it easy to differentiate them from user fields. This option lets you include internal fields in the output. This is handy, especially the _ts field, which indicates when the document was last updated.
    • Number of retries on failure & Retry interval – Set a reasonable number of retries. In this case, I have used the value 1.
    • Connection Mode – use Gateway mode here; it is the most compatible option and works through firewall rules.

[Image: Migration tool – advanced options]

  • Specify the target information. In this case, we want to export to a JSON document; the output can go either to a local file or to blob storage.

[Image: Migration tool – target information]

  • Error Logging. Set these options to enable error logging.

[Image: Migration tool – error logging]

Hit Import, and if there are no errors you will soon have a new JSON file with all the data from your collection. If you want only a subset of the data, all you need to do is modify the source query.
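For example, a source query along these lines exports only a slice of the collection (the foodGroup field matches the one mapped in the OPENJSON example further down; the filter value itself is just an illustration):

--Sketch only: restrict the export to a subset of documents
SELECT * FROM c WHERE c.foodGroup = 'Dairy and Egg Products'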

2. Import json files to SQL Server:

SQL Server 2016 introduced a JSON parsing function called OPENJSON. It requires a database compatibility level of 130; if none of your user databases are at that level yet but you are on a 2016 engine, you can context-switch to a system database (which is already at 130) to use OPENJSON.
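If you are unsure where a database stands, a quick check of the compatibility level looks like this (a standard catalog view query, nothing specific to this example):

--OPENJSON needs compatibility level 130 or higher
SELECT name, compatibility_level
FROM sys.databases
WHERE name = DB_NAME();

With that sorted, the exported file can be read and shredded as follows.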

DECLARE @productNutrition varchar(max);

--Read from the json file using OPENROWSET
SELECT @productNutrition = BulkColumn
FROM OPENROWSET(BULK 'C:\Users\Smruthi\Downloads\Arjun\productnutrition_20180427.json', SINGLE_BLOB) AS j;

--Pass the variable containing json as a parameter to the OPENJSON function
SELECT *
FROM OPENJSON (@productNutrition)
WITH
(
ProductID varchar(20) '$.id',
ProductDescription varchar(100) '$.description',
ProductGroup varchar(200) '$.foodGroup',
ServingAmount float '$.servings[1].amount',
ServingUnit varchar(10) '$.servings[1].description',
nutrients nvarchar(max) AS JSON --note that property names in JSON paths are case sensitive
);

Give it a go yourself. If you have any question, leave a comment and I will be happy to assist.

[Image: Sample JSON file]

[Image: OPENJSON query output]

Live Query Statistics – SQL Server 2016

SQL Server Management Studio (SSMS) 2016 has a handy new feature to View Live Query Statistics. This feature also works when connected to a 2014 instance using SSMS 2016.

Why use it?

Previously, similar per-operator statistics were only available after the query had finished, for example by running:

SET STATISTICS PROFILE ON; --or SET STATISTICS XML ON;

With the new feature, the main difference is that the stats refresh as the query progresses, providing live coverage of what SQL Server is up to. This is a fantastic feature for learning about query processing and execution plans, and it helps to identify performance problems. Only yesterday, I used it to spot a “bad plan” where SQL Server was doing an Eager Spool, cross-joining two million-record tables and filling tempdb.

How to use it?

To use the feature, simply click on “Include Live Query Statistics” button in the toolbar or under menu Query -> Include Live Query Statistics.
[Image: Include Live Query Statistics button in SSMS]

Watch the magic unfold when you execute a query.

[Image: Live Query Statistics in action]

 

If you are not using SQL Sentry Plan Explorer already, install the PRO version for free from SentryOne. I find it amazing, and it’s probably the best free tool out there for performance analysis. Install the SSMS add-in and you can right-click on a plan to view it in SQL Sentry Plan Explorer. It becomes incredibly useful for complex plans that are cumbersome to navigate in SSMS, not to mention the host of other information that is presented nicely to make performance tuning enjoyable.

[Image: SQL Sentry Plan Explorer add-in in SSMS]

[Image: SQL Sentry Plan Explorer]

Watch out for that..

Live Query Statistics is an interesting feature, but I had a nervous few moments when my long-running ETL stored procedure produced a plan that was way too big for SSMS to handle. I eventually had to cancel execution and view the plan from the DMV. So, while it is a nice feature, watch out for a few gotchas. Microsoft also advises that there may be a mild performance dip in the execution of your query while you use this feature.
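If you end up in the same spot, the live per-operator counters behind this feature can also be read straight from the sys.dm_exec_query_profiles DMV. A rough sketch (rows appear only for sessions running with live statistics or with SET STATISTICS XML/PROFILE enabled; 53 is a placeholder session_id):

--Live per-operator progress for a running query
SELECT p.session_id,
       p.node_id,
       p.physical_operator_name,
       p.row_count,
       p.estimate_row_count
FROM sys.dm_exec_query_profiles AS p
WHERE p.session_id = 53
ORDER BY p.node_id;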

Who can use it?

You need to have SHOWPLAN and VIEW SERVER STATE permissions to be able to use this feature effectively.
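For reference, granting these looks roughly like the following; the database, user and login names are placeholders.

--SHOWPLAN is a database-level permission; VIEW SERVER STATE is server-level
USE YourDatabase;
GRANT SHOWPLAN TO [YourDatabaseUser];
GO
USE master;
GRANT VIEW SERVER STATE TO [YourLogin];
GO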

Overall, this is a great feature that puts information at your fingertips when you are scrambling to address that performance issue that is driving everyone nuts!

Database Mail shutting down. Mail queue has been stopped.

Database Mail on our production servers started acting up last week. We are on SQL Server 2012 SP1. There were a couple of sets of symptoms; following is what I did to get Database Mail working again.

I hope you don’t spend four hours finding a solution like I did.

Symptom 1:

Mails are not sent anymore, and sending a test mail does not work. When the Database Mail status is checked using the following procedure, it returns “STARTED”.

USE msdb
GO
EXEC dbo.sysmail_help_status_sp

However, checking the status of the mail items themselves using the following query shows that they are “unsent”.

SELECT TOP 10 sent_status, *
FROM dbo.sysmail_allitems WITH (NOLOCK)
ORDER BY send_request_date DESC

Stopping DB mail using the following SP does not work anymore. The command takes forever to execute and nothing happens.

USE msdb
GO
EXEC dbo.sysmail_stop_sp

If these are the symptoms,

  1. Kill the “DatabaseMail.exe” process from Task Manager on the server to stop Database Mail.
  2. Check status using sysmail_help_status_sp and ensure it shows “STOPPED”.
  3. Start DB mail using the following command.
    USE msdb
    GO
    EXEC dbo.sysmail_start_sp
  4. Check status of mail items. The sent_status column should now show “sent”.

Symptom 2:

The fix for “Symptom 1” worked on one server, but on another, mails were still not being sent at all. If the above fix doesn’t do it for you, stop Database Mail and check the mail log in SSMS under

Management->Database Mail->View Database Mail Log

If the log shows the message “DatabaseMail process is shutting down. Mail queue has been stopped.”, check for mails stuck in the mail queue.

USE msdb
GO
SELECT COUNT(*) FROM ExternalMailQueue --check if this is non-zero
  1. If the above query returns a non-zero number, there are stuck mails. Use the following script to drain the queue. Please be aware that all these MAILS WILL BE LOST!
USE msdb
GO
ALTER QUEUE ExternalMailQueue WITH STATUS = ON;

SET NOCOUNT ON;
DECLARE @Conversation_handle uniqueidentifier;
DECLARE @message_type nvarchar(256);
DECLARE @counter bigint;
DECLARE @counter2 bigint;

SET @counter = (SELECT COUNT(*) FROM ExternalMailQueue);
SET @counter2 = 0;

--Drain the queue one message at a time
WHILE (@counter2 <= @counter)
BEGIN
    RECEIVE TOP (1) @Conversation_handle = conversation_handle, @message_type = message_type_name FROM ExternalMailQueue;
    SET @counter2 = @counter2 + 1;
END

2. Check the number of mails in the mail queue. It should now be zero.

USE msdb
GO
SELECT COUNT(*) FROM ExternalMailQueue

3. Start DB mail using

USE msdb
GO
EXEC dbo.sysmail_start_sp

 4. Check status of mail items. The sent_status column should now show “sent”.

XQuery

XQuery support was introduced in SQL Server 2005. It’s a powerful feature, and when used together with a CTE it helps to keep code simple and clean. I use XML extensively in my code, and many a time I find myself looking up old code, searching the internet or trying things out myself. I hope this post will be a single point of reference for most of the usual XQuery coding.

Following is a simple script to read from an XML variable. The same code can be used to read from an XML column as well.

NOTE: I will append more XQuery code samples to this post.

DECLARE @xmlProduct XML = 
'<Catalog>
	<Product>
		<Name>Selle Italia Road</Name>
		<Code>SIR-1</Code>
		<Category>Saddle</Category>
		<Description>Sleek saddle from Selle Italia</Description>
		<UnitPrice>50</UnitPrice>
		<Currency>USD</Currency>
	</Product>
	<Product>
		<Name>Brooks Tourer</Name>
		<Code>BT-T</Code>
		<Category>Saddle</Category>
		<Description>Brooks leather saddle for touring</Description>
		<UnitPrice>120</UnitPrice>
		<Currency>USD</Currency>
	</Product>
</Catalog>
'
--Read from nodes
SELECT
c.value('Name[1]','varchar(50)') [ProductName],
c.value('Code[1]','varchar(50)') [Code],
c.value('Category[1]','varchar(50)') [Category],
c.value('UnitPrice[1]','varchar(50)') + CHAR(32) + c.value('Currency[1]','varchar(50)') [Price],
c.value('Description[1]','varchar(50)') [Description]
FROM 
@xmlProduct.nodes('//Catalog/Product') AS TAB(c)

Output:
[Image: XQuery result set]
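As a small sketch of the CTE pattern mentioned at the top of this post (run in the same batch as the script above, so @xmlProduct is still in scope), wrapping the .nodes() shredding in a CTE keeps the downstream query clean:

--Shred the XML once in a CTE, then query it like a normal result set
;WITH ProductCTE AS
(
    SELECT
        c.value('Name[1]','varchar(50)') AS ProductName,
        c.value('Category[1]','varchar(50)') AS Category,
        c.value('UnitPrice[1]','decimal(10,2)') AS UnitPrice
    FROM @xmlProduct.nodes('/Catalog/Product') AS TAB(c)
)
SELECT ProductName, Category, UnitPrice
FROM ProductCTE
WHERE UnitPrice > 100;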

Retrieve unsaved query

I was working on a remote machine, and while I was away the machine was restarted. I lost the scripts I had not saved – or so I thought – until I came up with this query, which was a life saver!

SELECT TOP 10
EST.text
FROM sys.dm_exec_query_stats EQS
CROSS APPLY sys.dm_exec_sql_text(EQS.sql_handle) EST
WHERE EST.text LIKE '%text%' --replace 'text' with a phrase you remember from the lost script
ORDER BY EQS.last_execution_time DESC

Find tables which are most commonly used in Stored Procedures

When you start working on a new project, one of the important tasks as a database developer is to become familiar with the table names. Here’s a script that will help you identify the tables most commonly referenced in stored procedures (assuming that the most commonly used are the most important).

SELECT TOP 50
t.name, COUNT(1) AS tcount
FROM INFORMATION_SCHEMA.ROUTINES r
INNER JOIN sys.tables t ON r.ROUTINE_DEFINITION LIKE ('%' + t.name + '%')
GROUP BY t.name
ORDER BY tcount DESC
--note: ROUTINE_DEFINITION is truncated to 4000 characters; use sys.sql_modules for complete definitions