Category: T-SQL

Create a Date Dimension in Azure SQL Data Warehouse

Most data warehouses and data marts require a date dimension or calendar table. Those of us who have been building data warehouses in SQL Server for a while have collected our favorite scripts to build out a date dimension. For a standard date dimension, I am a fan of Aaron Bertrand’s script posted on MSSQLTips.com. But the current version (as of Aug 8, 2016) of Azure SQL Data Warehouse doesn’t support computed columns, which are used in Aaron’s script.

I adapted Aaron’s script to work in Azure SQL Data Warehouse and am sharing it with you below, so you don’t have to do the same. I did leave out holidays because I didn’t need them in my calendar table. To add them back in, just add the column back to the table and use Aaron’s update statements. I also changed the MonthYear field to include a space between the month and year. Otherwise, my script should produce the same results as Aaron’s in an Azure SQL DW-friendly way. Notice that I chose to use Round Robin distribution for my date table. Also, tables are now created with clustered columnstore indexes by default in Azure SQL DW.

Instead of the computed columns, I inserted the dates, then updated the other columns in the temporary table. Then I used the Create Table As Select syntax to create my final RPT.Calendar table.
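
In case it helps to see the shape of the workaround, here is a minimal sketch (not the full adapted script): the column list is abbreviated, the names are placeholders, and the dates are generated with a simple loop just to keep the example short, whereas the real script builds them set-based.

-- 1) Stage just the dates in a temp table, since computed columns aren't supported.
CREATE TABLE #dim
(
    [DateKey] int NOT NULL,
    [Date] date NOT NULL,
    [Year] smallint NULL,
    [Month] tinyint NULL,
    [MonthYear] varchar(20) NULL
);

DECLARE @StartDate date = '20100101';
DECLARE @EndDate date = DATEADD(YEAR, 30, @StartDate);
DECLARE @CurrentDate date = @StartDate;

WHILE @CurrentDate < @EndDate
BEGIN
    INSERT INTO #dim ([DateKey], [Date])
    VALUES (CONVERT(int, CONVERT(char(8), @CurrentDate, 112)), @CurrentDate);
    SET @CurrentDate = DATEADD(DAY, 1, @CurrentDate);
END;

-- 2) Update the remaining columns in place of computed columns.
UPDATE #dim
SET [Year]      = YEAR([Date]),
    [Month]     = MONTH([Date]),
    [MonthYear] = DATENAME(MONTH, [Date]) + ' ' + CONVERT(varchar(4), YEAR([Date]));

-- 3) Use CTAS to build the final table with Round Robin distribution.
--    Clustered columnstore is the default index type, so it isn't specified.
CREATE TABLE RPT.Calendar
WITH (DISTRIBUTION = ROUND_ROBIN)
AS
SELECT * FROM #dim;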

Happy time trending!

Notes and Tips on SQL Server Spatial Data Types

I’ve been working on a project that includes geographical data representing stops on a delivery route. I’ve just completed loading this data into a data mart. The source data contains longitude and latitude in millionths of a degree with 9 digits of data. We haven’t decided what tool we will use to visualize this data yet, but we know Power View and Power Map both accept latitude and longitude values. I decided to store my longitude and latitude data in decimal(9,6) fields. There is a good possibility that we may be computing distances between points in the future, so I thought it would be good to store the data as a spatial data type as well. I thought I would share a few things that I learned along the way.

There are two spatial data types in SQL Server: geometry and geography. Geometry represents the flat-earth system where units are all equally spaced apart. Geography represents the round-earth system measured in latitude and longitude. Since I had longitude and latitude in my data, I used the geography data type. The geography spatial data type is implemented as a .NET common language runtime (CLR) data type in SQL Server.

I populated my table using a query, a snippet of which I’ve included below. You can see the use of the Point function to create my geography values.

SELECT 
     ...
     -- Convert from millionths of a degree to decimal degrees
     [Latitude] = [Latitude] / 1000000.0
    ,[Longitude] = [Longitude] / 1000000.0 
     -- Build a geography point from latitude, longitude, and SRID 4326
    ,[GeographyPoint] = geography::Point([Latitude]/1000000.0, [Longitude]/1000000.0, 4326)
FROM [MySourceTable]

The Point function accepts a latitude, longitude, and SRID, and returns a geography value. An SRID is a unique identifier associated with a coordinate system, tolerance, and resolution. SRIDs are not specific to SQL Server. They are maintained by the International Association of Oil & Gas Producers (OGP) Surveying & Positioning Committee. Here’s a blog post that I think does a good job explaining many of the terms associated with spatial data in SQL Server.

Tip #1: You can see a list of SRIDs available in SQL Server by running the following query. SQL Server uses the default SRID of 4326, which is the WGS 84 spatial reference system.

SELECT * FROM sys.spatial_reference_systems

My source database has planned delivery stops and times and actual delivery stops and times stored in separate columns in a very wide table. I decided to pivot that data and create a table with a scenario key that refers to either plan or actual data. To do this, I wrote 2 queries and attempted to union them together to produce my final data set. That’s when I learned:

(Image: geography UNION error message)

Tip #2: When SQL Server performs a UNION it must compare values to remove duplicate rows. CLR user-defined type columns like geography are not comparable. As long as there is no risk of duplicate data between the two sets, you can use UNION ALL.

The query below works just fine as long as you use UNION ALL.

SELECT 
     [Latitude]
    ,[Longitude] 
    ,[GeographyPoint]
FROM [MySourceTable]
WHERE [RouteID] = 1

UNION ALL

SELECT 
     [Latitude]
    ,[Longitude] 
    ,[GeographyPoint]
FROM [MySourceTable]
WHERE [RouteID] = 5

As I finalized my table design I considered using a computed column to store my geography data. But I encountered an issue when I went to add a spatial index.  Spatial indexes are built on top of B+ trees. They decompose space into 4 levels of grids. I think spatial indexes are interesting, but they have some restrictions of which you should be aware. They require the table to have a clustered primary key. They cannot be specified on indexed views. And…

Tip #3: You can create a computed column to store the geography point based upon the latitude and longitude. But you cannot create a spatial index on a computed column.

If you try to create a spatial index on a computed column you will get SQL Server error message 6342.
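
To illustrate the restriction, here is a minimal sketch of the scenario; the table and column names are made up for this example. The index on the regular geography column works (the table has the required clustered primary key), while the same statement against the computed column fails with error 6342.

-- Hypothetical table used only to illustrate the restriction.
CREATE TABLE dbo.DeliveryStop
(
    [DeliveryStopID] int NOT NULL PRIMARY KEY CLUSTERED, -- spatial indexes require a clustered primary key
    [Latitude] decimal(9,6) NOT NULL,
    [Longitude] decimal(9,6) NOT NULL,
    [GeographyPoint] geography NULL,
    [GeographyPointComputed] AS geography::Point([Latitude], [Longitude], 4326)
);

-- Works: the index is built on a regular geography column.
CREATE SPATIAL INDEX SIX_DeliveryStop_GeographyPoint
    ON dbo.DeliveryStop ([GeographyPoint]);

-- Fails with error 6342: a spatial index cannot be created on a computed column.
CREATE SPATIAL INDEX SIX_DeliveryStop_GeographyPointComputed
    ON dbo.DeliveryStop ([GeographyPointComputed]);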

You don’t have to use spatial data types just because you have spatial data. Many data viz tools have built-in geocoding that will accept longitude and latitude or an address. But spatial data types can be useful when calculating distances between two points and planning and measuring routes.
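
As a quick illustration of the distance use case, here is a small example with two made-up points; the coordinates are arbitrary, and for SRID 4326 STDistance returns the result in meters.

DECLARE @Stop1 geography, @Stop2 geography;

-- Two arbitrary points: latitude, longitude, SRID.
SET @Stop1 = geography::Point(44.977753, -93.265015, 4326);
SET @Stop2 = geography::Point(44.953705, -93.089958, 4326);

-- Shortest distance between the two points, in meters for SRID 4326.
SELECT @Stop1.STDistance(@Stop2) AS DistanceInMeters;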

Biml for a Type 1 Slowly Changing Dimension

I’ve been working on building my Biml library over the last few months. One of the first design patterns I created was a Type 1 Slowly Changing Dimension where all fields except the key fields that define the level of granularity are overwritten with updated values.  It assumes I have a staging table, but it could probably be easily modified to pull directly from the source table if needed.

For simplicity, I’m going to create DimSalesReason from the AdventureWorks database as my example. Please pardon the large amount of code in this post. I wanted to provide a fairly complete picture so you can see how the pieces fit together so I’m providing all of the SQL and Biml needed to generate the package.

Required Database Objects

My staging table is a copy of the source table and looks like this:

CREATE TABLE [Staging].[SalesReason](
    [SalesReasonID] [int] NOT NULL,
    [Name] [nvarchar](50) NOT NULL,
    [ReasonType] [nvarchar](50) NOT NULL,
    [ModifiedDate] [datetime] NOT NULL
)

Here is the DDL for my dimension table:

CREATE TABLE [dbo].[DimSalesReason](
    [SalesReasonKey] int IDENTITY(1,1) NOT NULL Primary Key
    ,[SalesReasonID] int not null
    ,[SalesReasonName] nvarchar(50) not null
    ,[SalesReasonType] nvarchar(50) not null
    ,[HistoricalHashKey] varbinary(20) NOT NULL
    ,[ChangeHashKey] varbinary(20) NOT NULL
    ,[InsertDttm] datetime not null
    ,[UpdateDttm] datetime NULL
)

You’ll notice I have some audit fields in my table. UpdateDttm is the datetime when the row was last updated. InsertDttm is the datetime when the row was initially inserted. The HistoricalHashKey and ChangeHashKey are used for change detection. I may not need them in a dimension this simple, but I use them in larger/wider dimensions and I like my dimensions to be built consistently (unless they need to be tweaked for performance). The HistoricalHashKey represents the business key of the table that defines the level of uniqueness for the row in the dimension table. The ChangeHashKey represents all the other fields that are used in the dimension row. This makes it easier to determine if values have changed since I can compare just the ChangeHashKey instead of each individual field.

I also have an update table. This allows me to update in bulk rather than updating in place row by row. With a very small table, you might not notice a performance difference, but this pattern becomes much more efficient as the table grows.

CREATE TABLE [Updt].[UpdtSalesReason](
	 [SalesReasonID] int not null
	,[SalesReasonName] nvarchar(50) not null
	,[SalesReasonType] nvarchar(50) not null
	,[HistoricalHashKey] varbinary(20) NOT NULL
	,[ChangeHashKey] varbinary(20) NOT NULL
	,[InsertDttm] datetime not null
)
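
The set-based update that consumes this table shows up later as the UpdateSQLStatement variable in CreateDim1.Biml, but this is the shape of the pattern: a single update joined on the business key hash, rather than a row-by-row OLE DB Command in the data flow.

-- Bulk update of the dimension from the update table
-- (the same statement passed in as UpdateSQLStatement below).
UPDATE d
SET d.SalesReasonName = u.SalesReasonName,
    d.SalesReasonType = u.SalesReasonType,
    d.ChangeHashKey   = u.ChangeHashKey
FROM Updt.UpdtSalesReason u
INNER JOIN dbo.DimSalesReason d
    ON u.HistoricalHashKey = d.HistoricalHashKey;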

I use a view to do most of the transformation work (joins, business logic, etc.) for my Type 1 SCD package. Here’s my view for the SalesReasonDimension:

Create View [Staging].[DimSalesReason] as
With SalesReasonData as (
    Select   [SalesReasonID]
        ,[Name] as [SalesReasonName]
        ,[ReasonType] as [SalesReasonType]
        ,CONVERT(VARBINARY(20), HASHBYTES('MD5', CONCAT(SalesReasonID, ' '))) AS [HistoricalHashKey]
        ,CONVERT(VARBINARY(20), HASHBYTES('MD5', CONCAT(Name, ' ', ReasonType))) AS [ChangeHashKey]
        ,CURRENT_TIMESTAMP as InsertDttm
        ,CURRENT_TIMESTAMP as UpdtDttm
    from [Staging].[SalesReason]
    UNION
    Select   -1 as [SalesReasonID]
        ,'Unknown' as [SalesReasonName]
        ,'Unknown' as [SalesReasonType]
        ,CONVERT(VARBINARY(20), HASHBYTES('MD5', CONCAT(-1, ' '))) AS [HistoricalHashKey]
        ,CONVERT(VARBINARY(20), HASHBYTES('MD5', CONCAT('Unknown', ' ', 'Unknown'))) AS [ChangeHashKey]
        ,CURRENT_TIMESTAMP as InsertDttm
        ,CURRENT_TIMESTAMP as UpdtDttm
)
Select   SalesReasonID
    ,SalesReasonName
    ,SalesReasonType
    ,HistoricalHashKey
    ,ChangeHashKey
    ,CONVERT(VARCHAR(34), HistoricalHashKey, 1) AS HistoricalHashKeyASCII
    ,CONVERT(VARCHAR(34), ChangeHashKey, 1) AS ChangeHashKeyASCII
    ,InsertDttm
    ,UpdtDttm
from SalesReasonData

Using a view to do the transformation work allows me to use the SQL Server engine to do the things it is good at (joins, case statements, conversions) and SSIS to do the things it is good at (controlling the flow of data). It also makes it easy to create a design pattern in Biml that is abstract enough that it easily fits most SCD 1 scenarios. The hashkey fields allow me to do a lookup on one field to determine if the row already exists in the dimension table and a quick comparison to determine if any of the values for that row have changed. I chose to use an MD5 hash because I think it is a good balance of speed/size and collision risk. You may feel differently (others prefer SHA1 due to lower collision risk). On a dimension this size, the difference probably isn’t noticeable.

You can also see that I add my unknown member row in my view. Some people prefer to set the surrogate key of the unknown value to -1. I prefer to set the business key to -1 and let the surrogate key be set to any value in the load process. I’m not a fan of having to turn off the identity insert to add the unknown row. When I do the dimension key lookup for fact tables, I just look for the row where the business key = -1. This also means I don’t have to check my dimension tables to see if someone remembered to insert the unknown rows after deployment to a new environment because I know the unknown rows will be inserted when the package is run.
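
For example, when loading a fact table, retrieving the surrogate key for the unknown member is just a lookup on the business key, whatever identity value it happened to receive:

-- Find the unknown member's surrogate key by its business key.
SELECT SalesReasonKey
FROM dbo.DimSalesReason
WHERE SalesReasonID = -1;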

I have audit steps in my package that write to the Audit.PackageControl table using stored procedures.

CREATE TABLE [Audit].[Package_Control](
	[Package_NM] [varchar](100) NOT NULL,
	[Package_ID] [uniqueidentifier] NOT NULL,
	[Parent_Package_ID] [uniqueidentifier] NULL,
	[Execution_ID] [bigint] NULL,
	[Start_TS] [datetime] NOT NULL,
	[Stop_TS] [datetime] NULL,
	[Insert_Row_QT] [int] NULL,
	[Update_Row_QT] [int] NULL,
	[Unchanged_Row_QT] [int] NULL,
	[Deleted_Row_QT] [int] NULL,
	[Duration_s]  AS (datediff(second,[Start_TS],[Stop_TS])),
	[PackageLogID] [int] IDENTITY(1,1) NOT NULL
)

CREATE PROCEDURE [Audit].[PackageControlStart]
(
    @PackageName varchar(100)
,   @PackageId uniqueidentifier
,   @ParentPackageId uniqueidentifier = NULL
,   @ExecutionId bigint
,   @StartTime DATETIME
,   @StopTime datetime = NULL
,   @InsertRowQuantity int = NULL
,   @UpdateRowQuantity int = NULL
,   @UnchangedRowQuantity int = NULL
)
AS
BEGIN
    SET NOCOUNT ON;
	DECLARE @PackageLogId int 
    INSERT INTO [Audit].[Package_Control]
    (
        [Package_NM]
    ,   [Package_ID]
    ,   [Parent_Package_ID]
    ,   [Execution_ID]
    ,   [Start_TS]
    ,   [Stop_TS]
    ,   [Insert_Row_QT]
    ,   [Update_Row_QT]
    ,	[Unchanged_Row_QT]
    )
    SELECT
        @PackageName 
    ,   @PackageId 
    ,   @ParentPackageId 
    ,   @ExecutionId 
    ,   CURRENT_TIMESTAMP
    ,   @StopTime 
    ,   @InsertRowQuantity 
    ,   @UpdateRowQuantity 
    ,	@UnchangedRowQuantity
  SELECT @PackageLogID = SCOPE_IDENTITY()
  SELECT  @PackageLogID as PackageLogID
END

CREATE PROCEDURE [Audit].[PackageControlStop]
(
    @PackageId uniqueidentifier
,   @ExecutionId bigint
,   @InsertRowQuantity int = NULL
,   @UpdateRowQuantity int = NULL
,	@UnchangedRowQuantity int = NULL
)
AS
BEGIN
    SET NOCOUNT ON;
    -- Close out the execution.
    UPDATE PC
    SET [Stop_TS] = CURRENT_TIMESTAMP  
    ,   [Insert_Row_QT] = @InsertRowQuantity
    ,   [Update_Row_QT] = @UpdateRowQuantity
	,	[Unchanged_Row_QT] = @UnchangedRowQuantity
    FROM  [Audit].[Package_Control] AS PC
    WHERE PC.Package_ID = @PackageId
        AND PC.Execution_ID = @ExecutionId
        AND PC.[Stop_TS] IS NULL;  
END

And Now for the Biml

My Biml library usually contains 3 files for each package type:

  • ProjectConnections – I like to keep my connection managers in a separate file so I only have to update one place if I need to add or change a connection.
  • Dim1 – This contains my actual design pattern.
  • CreateDim1 – This is the Biml file I run to generate the package. It gets separated so I can pull values from databases and pass in variables to my design pattern using BimlScript. For this example I have hardcoded my variables into this file rather than pulling from a database.

ProjectConnections.Biml

<#@ template language="C#" tier="1" #>	
<Connections>
    <OleDbConnection Name="AWBIML" ConnectionString ="Data Source=localhost\SQL2014;
        Initial Catalog=AWBIML;Integrated Security=SSPI;Provider=SQLNCLI11.1;" CreateInProject="true"/>
    <OleDbConnection Name="Audit" ConnectionString ="Data Source=localhost\SQL2014;
        Initial Catalog=AWBIML;Integrated Security=SSPI;Provider=SQLNCLI11.1;" CreateInProject="true"/>	
</Connections>

The ProjectConnections file doesn’t begin with Biml tags because it gets included in the middle of the CreateDim1.biml file. AWBIML is a local database I created to contain my data mart.

Dim1.Biml

<#@ template language="C#" tier="2" #>

<#@ property name="PackageName" type="String" #>
<#@ property name="DstSchemaName" type="String" #>
<#@ property name="DstTableName" type="String" #>
<#@ property name="DstConnection" type="String" #>
<#@ property name="DataFlowSourceName" type="String" #>
<#@ property name="SrcConnection" type="String" #>
<#@ property name="SourceQuery" type="String" #>	
<#@ property name="UpdateSchemaName" type="String" #>	
<#@ property name="UpdateTableName" type="String" #>	
<#@ property name="UpdateConnection" type="String" #>		
<#@ property name="UpdateSQLStatement" type="String" #>	
	
<Package Name="<#=PackageName#>" Language="None">
    <Parameters>
        <Parameter DataType="String" Name="ParentPackageID">00000000-0000-0000-0000-000000000000</Parameter>
    </Parameters>
    <Variables>
        <Variable EvaluateAsExpression="true" DataType="String"  
          Name="QualifiedTableSchema">"["+@[User::SchemaName]+"].["+@[User::TableName]+"]"
        </Variable>
        <Variable DataType="String" Name="QueryAuditStart">
          EXECUTE [Audit].[PackageControlStart] @PackageName=?, @PackageId=?, 
          @ParentPackageId=?,@ExecutionId=?, @StartTime=?;
        </Variable>
        <Variable DataType="String" Name="QueryAuditUpdate">
          EXECUTE [Audit].[PackageControlStop] @PackageId=?, @ExecutionId=?, 
          @InsertRowQuantity=?, @UpdateRowQuantity=?, @UnchangedRowQuantity=?;</Variable> 
        <Variable DataType="Int32"  Name="RowCountChanged">0</Variable>
        <Variable DataType="Int32"  Name="RowCountNew">0</Variable>
        <Variable DataType="Int32"  Name="RowCountSource">0</Variable>
        <Variable DataType="Int32"  Name="RowCountUnchanged">0</Variable>
        <Variable DataType="String" Name="SchemaName"><#=DstSchemaName#></Variable>
        <Variable DataType="String" Name="TableName"><#=DstTableName#></Variable>
    </Variables>
    
    <Tasks>
      <ExecuteSQL Name="SQL Begin Audit" ConnectionName="Audit">
       <VariableInput VariableName="User.QueryAuditStart" />
        <Parameters>
          <Parameter Name="0" VariableName="System.PackageName" DataType="String" Length="-1"/>
          <Parameter Name="1" VariableName="System.PackageID" DataType="Guid" Length="-1"/>
          <Parameter Name="2" VariableName="ParentPackageID" DataType="Guid" Length="-1"/>
          <Parameter Name="3" VariableName="System.ServerExecutionID" DataType="Int64" 
            Length="-1"/>
          <Parameter Name="4" VariableName="System.StartTime" DataType="Date" Length="-1"/>
        </Parameters>
      </ExecuteSQL>

        <ExecuteSQL Name="SQL Truncate <#=UpdateTableName#>" 
          ConnectionName="<#=UpdateConnection#>"> 
            <DirectInput>Truncate Table [<#=UpdateSchemaName#>].[<#=UpdateTableName#>]
            </DirectInput>
          <PrecedenceConstraints>
            <Inputs>
              <Input OutputPathName="SQL Begin Audit.Output" />
            </Inputs>
          </PrecedenceConstraints>
        </ExecuteSQL>

        <Dataflow Name="DFT Insert<#=DstTableName#>">
            
            <Transformations>
            
                <RowCount Name="CNT Changed_Rows" VariableName="User.RowCountChanged">
                    <InputPath OutputPathName="CSPL Check For Changes.ChangedRows" />
                </RowCount>
                
                <ConditionalSplit Name="CSPL Check For Changes">
                    <InputPath OutputPathName="LKP Historical Key.Match" />
                    <OutputPaths>
                        <OutputPath Name="ChangedRows">
                          <Expression>ChangeHashKeyASCII != lkp_ChangeHashKeyASCII</Expression>
                        </OutputPath>
                    </OutputPaths>
                </ConditionalSplit>
                
                <RowCount Name="CNT New_Rows" VariableName="User.RowCountNew">
                    <InputPath OutputPathName="LKP Historical Key.NoMatch" />
                </RowCount>
        
                <Lookup Name="LKP Historical Key" NoMatchBehavior="RedirectRowsToNoMatchOutput" 
                  OleDbConnectionName="<#=DstConnection#>">
                    <DirectInput>SELECT
                      CONVERT(VARCHAR(34), ChangeHashKey, 1) AS ChangeHashKeyASCII
                      , CONVERT(VARCHAR(34), HistoricalHashKey, 1) AS HistoricalHashKeyASCII
                      FROM
                      <#=DstSchemaName#>.<#=DstTableName#></DirectInput>
                   <Parameters>
                     <Parameter SourceColumn="HistoricalHashKeyASCII" />
                   </Parameters>
	           <ParameterizedQuery>select * from (SELECT
                     CONVERT(VARCHAR(34), ChangeHashKey, 1) AS ChangeHashKeyASCII
                     , CONVERT(VARCHAR(34), HistoricalHashKey, 1) AS HistoricalHashKeyASCII
                     FROM
                     <#=DstSchemaName#>.<#=DstTableName#>) [refTable]
                     where [refTable].[HistoricalHashKeyASCII] = ?</ParameterizedQuery>
                  <InputPath OutputPathName="CNT Source_Rows.Output" />
                  <Inputs>
                    <Column SourceColumn="HistoricalHashKeyASCII" 
                      TargetColumn="HistoricalHashKeyASCII" />
                  </Inputs>
                  <Outputs>
                    <Column SourceColumn="ChangeHashKeyASCII" 
                    TargetColumn="lkp_ChangeHashKeyASCII" />
                  </Outputs>
                </Lookup>
       
            <OleDbDestination Name="OLE_DST New_Rows" ConnectionName="<#=DstConnection#>">
              <InputPath OutputPathName="CNT New_Rows.Output" />
	      <ExternalTableOutput Table="<#=DstSchemaName#>.<#=DstTableName#>" /> 
            </OleDbDestination>
            
            <RowCount Name="CNT Source_Rows" VariableName="User.RowCountSource">
              <InputPath OutputPathName="<#=DataFlowSourceName#>.Output" />
            </RowCount>
            
            <OleDbSource Name="<#=DataFlowSourceName#>" ConnectionName="<#=SrcConnection#>">
	      <DirectInput><#=SourceQuery#></DirectInput>
            </OleDbSource>
               
            <RowCount Name="CNT Unchanged_Rows" VariableName="User.RowCountUnchanged">
              <InputPath OutputPathName="CSPL Check For Changes.Default" />
            </RowCount>
                
            <OleDbDestination Name="OLE_DST Update Table" ConnectionName="<#=DstConnection#>"> 
              <InputPath OutputPathName="CNT Changed_Rows.Output" />
	      <ExternalTableOutput Table="[<#=UpdateSchemaName#>].[<#=UpdateTableName#>]" />
            </OleDbDestination>
    
     </Transformations>
    
     <PrecedenceConstraints>
      <Inputs>
          <Input OutputPathName="SQL Truncate <#=UpdateTableName#>.Output" /> 
      </Inputs>
     </PrecedenceConstraints>
    </Dataflow>
            
    <ExecuteSQL Name="SQL Update <#=DstTableName#>" 
    ConnectionName="<#=DstConnection#>">
      <DirectInput><#=UpdateSQLStatement#></DirectInput>
        <PrecedenceConstraints>
          <Inputs>
            <Input OutputPathName="DFT Insert<#=DstTableName#>.Output" /> 
          </Inputs>
        </PrecedenceConstraints>
    </ExecuteSQL>
    
    <ExecuteSQL Name="SQL Close Audit" ConnectionName="Audit">
      <VariableInput VariableName="User.QueryAuditUpdate" />
        <Parameters>
          <Parameter Name="0" VariableName="System.PackageID" DataType="Guid" Length="-1"/>
          <Parameter Name="1" VariableName="System.ServerExecutionID" DataType="Int64"
            Length="-1"/>
          <Parameter Name="2" VariableName="User.RowCountNew" DataType="Int32" Length="-1"/>
          <Parameter Name="3" VariableName="User.RowCountChanged" DataType="Int32" 
            Length="-1"/>
          <Parameter Name="4" VariableName="User.RowCountUnchanged" DataType="Int32" 
            Length="-1"/>
        </Parameters>
        <PrecedenceConstraints>
          <Inputs>
            <Input OutputPathName="SQL Update <#=DstTableName#>.Output" /> 
          </Inputs>
        </PrecedenceConstraints>
    </ExecuteSQL>
				    
  </Tasks>

</Package>

CreateDim1.Biml

<#@ template language="C#" hostspecific="true" #>
<Biml xmlns="http://schemas.varigence.com/biml.xsd">
<#@ include file="ProjectConnections.biml" #>

	<!--
	<#
	string PackageName  = "LoadDimSalesReason";
	string DstSchemaName = "dbo";
	string DstTableName = "DimSalesReason";
	string DstConnection = "AWBIML";
	string DataFlowSourceName = "OLE_SRC StgDimSalesReason";
	string SrcConnection = "AWBIML";
	string SourceQuery = @"SELECT [SalesReasonID]
					  ,[SalesReasonName]
					  ,[SalesReasonType]
					  ,[HistoricalHashKey]
					  ,[ChangeHashKey]
					  ,[HistoricalHashKeyASCII]
					  ,[ChangeHashKeyASCII]
					  ,[InsertDttm]
					  ,[UpdtDttm]
				  FROM [AWBIML].[Staging].[DimSalesReason]";
	string UpdateSchemaName = "Updt";
	string UpdateTableName = "UpdtSalesReason";
	string UpdateConnection = "AWBIML";
	string UpdateSQLStatement = @"Update d 
					set 
					d.SalesReasonName = u.SalesReasonName,
					d.SalesReasonType = u.SalesReasonType,
					d.changehashkey = u.changehashkey 
					FROM Updt.UpdtSalesReason u 
					inner join dbo.DimSalesReason d
                                        on u.HistoricalHashKey = d.HistoricalHashKey;";
	#>
	-->
	
    <Packages>
        <#=CallBimlScript("Dim1.biml", PackageName, DstSchemaName, DstTableName, DstConnection,
        DataFlowSourceName, SrcConnection, SourceQuery, UpdateSchemaName, UpdateTableName, 
        UpdateConnection, UpdateSQLStatement)#>
    </Packages>
	
</Biml>

Resulting SSIS Package

Executing the CreateDim1.Biml file generates a package called LoadDimSalesReason.dtsx. Here is the control flow:
(Image: BIML Dim1 control flow)
I start the package by logging the package start in my PackageControl table. Then I truncate my update table to prepare for the new data load. Next I have a data flow task, which inserts data into either the dbo.DimSalesReason table or the Updt.UpdtSalesReason table (see below). The SQL Update task updates the DimSalesReason table with the rows that were inserted into UpdtSalesReason. And finally, I log my package completion.

Here’s my data flow:
(Image: BIML Dim1 data flow)
First, I retrieve my data from my staging view. Then I count the number of rows coming from my source so I can log it in my PackageControl table. The lookup on the HistoricalHashKey field sends rows with no match to be inserted into the dimension table. If the rows have a match, they are checked to see if their ChangeHashKey values match the value of the ChangeHashKey in the existing row in the dimension table. If the ChangeHashKey values match, the row is counted and nothing else is done. If the ChangeHashKey values don’t match, the row is counted and written to the update table.

And that is all you need to create a Type 1 dimension load with Biml.

Fun With OPENROWSET

I’ve had several occasions to use OPENROWSET recently in T-SQL.  Although you can use it as an ad hoc alternative to a linked server to query a relational database, I’m finding it useful to get data from other sources. I’ll go into details of how to use it, but first I would like to acknowledge: OPENROWSET is not the answer to everything.  Sometimes you should use a linked server or SSIS or some other method.  I had some specific cases where I felt OPENROWSET made more sense for my task.
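
For anyone who hasn’t used the ad hoc, linked-server-style flavor mentioned above, here is a minimal example; the server, database, and table names are placeholders, it assumes the SQL Server Native Client 11 provider is installed, and it requires the Ad Hoc Distributed Queries setting shown later in this post.

-- Ad hoc query against another SQL Server instance without defining a linked server.
SELECT r.CustomerID, r.CustomerName
FROM OPENROWSET(
    'SQLNCLI11',
    'Server=SomeOtherServer;Database=SomeDatabase;Trusted_Connection=yes;',
    'SELECT CustomerID, CustomerName FROM dbo.Customer'
) AS r;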

Importing Data from an Excel Spreadsheet

Technical need: I had some data that needed to be validated before importing into SQL Server from Excel as part of an upload process in an application.  If it didn’t pass validation, I did not want to import the data. Rather than use SSIS to import the data into a table and then check it, I used OPENROWSET to pull it directly from the Excel file into a temp table, ran all of my validation checks, and then imported the data using SSIS all within a stored procedure that was called by an application.  Field and table names have been changed to protect the innocent.

The stored procedure:

CREATE PROCEDURE dbo.MySproc
@datafile nvarchar(127) 
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;
    DECLARE
        @sql nvarchar(max)
    ,   @rowcount int
 SELECT
        @rowcount = 0

    DECLARE @FileExists int

    BEGIN TRY
        EXECUTE master.dbo.xp_fileExist @datafile, @FileExists out
        -- Value of 1 = file exists
        -- Value of 0 = file doesn't
        IF @FileExists = 0
        BEGIN
            SELECT
            CONCAT('The file ', @datafile, ' could not be found.') 
                                                   AS reasontext
        ,   NULL AS field2
        ,   NULL AS field3;            
            RETURN -1;
        END
    END TRY
    BEGIN CATCH
        SELECT
            ERROR_MESSAGE()  AS reasontext
        ,   NULL AS field2
        ,   NULL AS field3;
        RETURN -1;

    END CATCH
 TRUNCATE TABLE dbo.finaldest;
    IF OBJECT_ID('tempdb..##tempf') IS NOT NULL
    BEGIN
        DROP TABLE ##tempf;
    END;

--Create temp table to hold values from Excel file
Create table ##tempf (
Field1 [nvarchar](255),
Field2 [nvarchar](255),
Field3 [nvarchar](255),
Field4 [nvarchar](255),
Field5 [nvarchar](255),
Field6 [nvarchar](255)
);
--dynamic sql to populate temp table with values from Excel file
    SELECT
    @sql = N'INSERT INTO ##tempf
   SELECT
   [Field1],
   [Field2], 
   [Field3],
   [Field4],
   [Field5],
   [Field6]
    FROM OPENROWSET (
                ''Microsoft.ACE.OLEDB.12.0'',''Excel 12.0;Database='
                + @datafile
                + ';HDR=YES;IMEX=1'', 
                ''SELECT s.* FROM [Sheet1$] AS S 
                WHERE [Field2] IS NOT NULL AND [Field3] IS NOT NULL'')';

    BEGIN TRY 
        EXEC sp_executesql @sql;
    END TRY
    BEGIN CATCH
        SELECT
            'File not found or invalid file type' AS reasontext
            --ERROR_MESSAGE()  AS reasontext
        ,   NULL AS Field2
        ,   NULL AS Field3;
 IF OBJECT_ID('tempdb..##tempf', 'U') IS NOT NULL 
        BEGIN
            DROP TABLE ##tempf
        END
      RETURN -1
    END CATCH
/*Do lots of data validation here
...
*/
--if data validation tests are passed, write to final destination table
Insert INTO dbo.[finaldest]
   ([Field1],
      [Field2], 
      [Field3],
      [Field4],
      [Field5],
      [Field6])
   Select [Field1],
      [Field2], 
      [Field3],
      [Field4],
      [Field5],
      [Field6] 
      from ##tempf;

 Drop table ##tempf; 
END

To make this work you need to install the Microsoft ACE OLE DB 12.0 provider on your server. The older Jet OLE DB provider won’t work here because it has no 64-bit version for use with 64-bit SQL Server. Once you have the ACE provider installed you need to change some settings:

EXEC master.dbo.sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0', 
N'AllowInProcess', 1
GO
EXEC master.dbo.sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0', 
N'DynamicParameters', 1
GO
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'Ad Hoc Distributed Queries', 1;
GO
RECONFIGURE;
GO

Here is a blog post that explains the errors you might get if you don’t have these settings correct. It also notes that you can check your settings in the registry keys. For SQL Server 2012 the path is HKEY_LOCAL_MACHINE -> SOFTWARE -> Microsoft -> Microsoft SQL Server -> MSSQL11.MSSQLSERVER -> Providers -> Microsoft.ACE.OLEDB.12.0. You can then see that AllowInProcess and DynamicParameters are set to 1.

(Image: Microsoft.ACE.OLEDB.12.0 registry key)

This provider seems to be a bit fussy and there are some things that took me a while to troubleshoot once I got started. First, make sure that the location of the Excel file you are reading is accessible to the account that is running the SQL engine and doesn’t require admin access. The linked server errors that get returned with this really aren’t that helpful, so you have to use your Google-fu to work your way through it. I used a global temporary table so it would be accessible across connections if it needed to be (I was having trouble when I used a local temp table). The application that calls this stored procedure queues up the calls and executes them serially so one can’t override the other.

Returning results from an Analysis Services cube in SQL

Technical need: I had some data (several facts with common dimensions) with calculations in a multidimensional cube and an application that needed that data but couldn’t/wouldn’t consume the MDX result set.

The query: For demonstration purposes I’m using a small query that returns the values for a drop-down box for selecting a member of the date dimension.

select a."[Measures].[DateKeys]" as DateKey, 
a."[Measures].[DateLevel]" as DateLevel, 
a."[Measures].[DateValues]" as DateValue 
from OPENROWSET 
('MSOLAP','Datasource=localhost; Initial Catalog=MyCubeDB;',
  
'with member [Measures].[DateValues] as
    Iif([Date].[Fiscal Calendar].currentmember.level_number = 2, "    " 
    + [Date].[Fiscal Calendar].currentmember.member_value, 
    [Date].[Fiscal Calendar].currentmember.member_value)
  member [Measures].[DateKeys] as 
    [Date].[Fiscal Calendar].currentmember.uniquename
  member [Measures].[DateLevel] as 
    [Date].[Fiscal Calendar].currentmember.level_number

select
 {[Measures].[DateValues], [Measures].[DateKeys], 
                           [Measures].[DateLevel]} on 0 ,
 {Exists(Descendants([Date].[Fiscal Calendar].[FY2014], 1, 
                         SELF_AND_BEFORE), , "MyMeasureGroup") } on 1
 from [MyCube]
') as a;

You can see that I explicitly named my fields, but you can also do a select * to get all the fields from the OPENROWSET. I had to put the fields from the cube in quotation marks to make it work because of the brackets in the field names. For Analysis Services, MSOLAP is the name of my provider.  You must have the correct version of this provider listed under the linked servers on the server where this is being executed.

(Image: MSOLAP provider listed under Linked Servers > Providers)

For SQL Server 2012 SP1, you can download the Microsoft Analysis Services OLE DB Provider for Microsoft SQL Server 2012 SP1 here. There is nothing special about the data source and initial catalog. They are exactly the same connection info you would use when connecting to SSAS through Management Studio. The last part is simply the MDX query. You should be able to copy that part out and run it in Management Studio connected to your SSAS database and get results.

Inserting Images Into a Database

Technical need: I was building a demo for Power BI and I needed to show some images in my Power View reports in Office 365. In order for the images to show in Office 365, you must import the images into SQL Server and then import them into Power Pivot. First I downloaded images of flags for each country from the CIA World Factbook. I took the folder of flags and put it in my pictures folder. I already had a table in SQL Server that contained country names and codes, so I just needed to add a flag for each country.

The query:

  CREATE TABLE [dbo].[countryflagsdemo]
  (
  [id] int, 
  [country] varchar(100), 
  [code] varchar(2),
  flagimage [varbinary](max)
  )
 --get ID, country, and code from existing table
  INSERT INTO dbo.countryflagsdemo (id, country, code)
  SELECT id, country, code FROM dbo.Countryflag1;

  DECLARE @i INT; 
  DECLARE @rows INT; 
  DECLARE @code VARCHAR(2);
  SET @i = 1;
--get max id value for loop
  SET @rows = (SELECT COUNT(country) FROM dbo.countryflagsdemo)
  WHILE (@i <= @rows)
  BEGIN 

    DECLARE @sql VARCHAR(MAX); 
    SET @code = (SELECT code FROM dbo.countryflagsdemo WHERE [id] = @i);
    SELECT @sql = 'UPDATE dbo.countryflagsdemo 
      SET flagimage = 
      (SELECT BulkColumn FROM 
        OPENROWSET (BULK 
            ''C:\Users\mlongoria\Pictures\flags\large\' + @code 
               + '-lgflag.gif'', Single_Blob) as flagimage)
            WHERE [code] = ''' + @code + ''';'

    EXEC (@sql);
    SET @i = @i + 1; 
    --PRINT @code;

END 
GO

I figured out how to do this based upon this MSDN blog post. I inserted my other data first and then added the images.  I’m sure you could find a way to do this all in one select statement.  Since this was a one-time thing I didn’t see the need to find another way. It should be noted that the bulk insert must pull from a location on your computer; you can’t use links to the images on the internet. I downloaded the images in a folder. Each image name was the abbreviation for the country. If you are going to import this data into PowerPivot, be sure you change the table behavior properties so it will recognize the varbinary field as an image.

 

There may be other ways to accomplish my goals, but it’s nice to understand how to use OPENROWSET and have that in my toolbox.