SSIS, Data Warehousing, Microsoft Technologies

My Preferences for SSIS Design

Lately, I have been using SSIS execution frameworks and Biml created by other people to populate data marts and data warehouses. It has taught me a few things and helped me clarify what I like and dislike compared to my usual framework. I’ve got the beginning of my preferences list started below. There are probably situations where I would want to deviate from my preferences, but I think they make a good starting point.

Populating Data

  • For self-service BI environments, use a date dimension that doesn’t go out much further than the greatest date in your data. This can be a view or stored procedure that limits and updates dates rather than a static date dimension that goes out until the end of time.
  • Unknown values are included in normal dimension loads, not in separate scripts that must be run on deployment. This way, if an unknown value is ever left out or deleted, it will be added in the next data load rather than requiring a special execution of a script.
  • Every table should have InsertDateTime and UpdateDateTime columns. The UpdateDateTime column should be populated with the same value as the InsertDateTime column upon creation of the row, rather than being left null.
  • Whatever you use to create tables, include primary keys, foreign keys, and indexes with your table definitions. Provide explicit constraint names to simplify database comparisons. You can disable your foreign keys, but they need to be there to provide that metadata. (A sketch of such a table definition follows this list.)
  • Separate your final dimensional/reporting tables from audit tables and staging tables. This can be done with separate schemas or even separate databases.
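
As a rough illustration of the timestamp and constraint preferences above, a couple of table definitions might look like the sketch below. This is illustrative only: the dw schema and the table and column names are assumptions, not a prescribed standard.

CREATE SCHEMA dw;
GO

CREATE TABLE dw.DimGeography
(
    GeographyKey   INT IDENTITY(1,1) NOT NULL CONSTRAINT PK_DimGeography PRIMARY KEY CLUSTERED,
    City           NVARCHAR(100) NOT NULL,
    StateProvince  NVARCHAR(100) NOT NULL,
    Country        NVARCHAR(100) NOT NULL,
    PostalCode     NVARCHAR(20)  NOT NULL,
    -- UpdateDateTime is populated with the same value as InsertDateTime when the row is created
    InsertDateTime DATETIME2(3)  NOT NULL CONSTRAINT DF_DimGeography_InsertDateTime DEFAULT (SYSDATETIME()),
    UpdateDateTime DATETIME2(3)  NOT NULL CONSTRAINT DF_DimGeography_UpdateDateTime DEFAULT (SYSDATETIME())
);

CREATE TABLE dw.DimCustomer
(
    CustomerKey    INT IDENTITY(1,1) NOT NULL CONSTRAINT PK_DimCustomer PRIMARY KEY CLUSTERED,
    CustomerName   NVARCHAR(200) NOT NULL,
    GeographyKey   INT           NOT NULL,
    InsertDateTime DATETIME2(3)  NOT NULL CONSTRAINT DF_DimCustomer_InsertDateTime DEFAULT (SYSDATETIME()),
    UpdateDateTime DATETIME2(3)  NOT NULL CONSTRAINT DF_DimCustomer_UpdateDateTime DEFAULT (SYSDATETIME()),
    -- Explicitly named foreign key; it can be disabled, but it still provides metadata
    CONSTRAINT FK_DimCustomer_DimGeography FOREIGN KEY (GeographyKey) REFERENCES dw.DimGeography (GeographyKey)
);

-- Disable the foreign key for load performance if needed; the metadata remains
ALTER TABLE dw.DimCustomer NOCHECK CONSTRAINT FK_DimCustomer_DimGeography;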

Data Integration Process

  • There should be consistent error handling in each layer (staging, dims, facts, etc.). If you write errors to another location (flat file, database table), have a process that notifies the right people that errors occurred. The process of consuming corrected data must be built, tested, and integrated into the existing process.
  • Make your error handling process reflect what end users need to see when an error occurs. Does it make sense to have a partial load when there is an issue? Or should it be all or nothing?
  • Have smart master packages that determine which packages to run. Don’t check whether the package should run inside of the package itself – do that in the master package.
  • Master packages should execute child packages in parallel as much as possible rather than defaulting to sequential execution.
  • Have an audit log with one row per package. Include the SSIS ServerExecutionID in the audit log – not the package-specific ID but the execution ID for the entire run. If there are incremental loads, the where clause used to filter the load should be captured in the audit table. Include row counts as well as package start and stop times in your audit log. (A sketch of such an audit table follows this list.)
  • Add an AuditLogID column on your dimension, fact, and staging tables so you can trace each row back to the process that populated it.
  • For dims and facts, perform change detection/deduplication of records, usually through hash values and either SSIS lookups or SQL queries with WHERE NOT EXISTS.
  • Avoid T-SQL MERGE statements. Write individual insert/update/delete statements. This avoids any bugs in MERGE and makes your SQL easier to understand and troubleshoot.
  • Use consistent naming of tasks, source, destinations, packages, connection managers, etc. Connection managers pointing to databases should have names that refer to the database rather than the server.
  • If you are downloading files, move the files to an archive folder once files are processed. You can have rules in place if you have retention limits. But you probably need to keep files from at least the last load for audit and troubleshooting purposes. This could change if you are importing very sensitive data.
  • Even if you need to copy all columns from a table, write a select statement for database sources that explicitly names the columns rather than using SELECT * or just selecting the table or view.
  • SSIS lookups should use an explicit query rather than referencing an entire table.
  • Implement restartability at the package level for most packages (you should have single-purpose packages executed by a master package). Checkpoints are ineffective within a package. If you build your audit log table correctly, you can get the list of packages that have not run in the last X minutes/hours and feed that to your master package.
  • Send email from your scheduling tool rather than within an SSIS package.
  • Track data lineage in your tables. This can be as simple as having a table that lists all of your data sources with an ID column and including that ID value in each row of your staging, fact, and dimension tables.
  • Dims and facts are not truncated. Data should be inserted and updated (and deleted, if necessary).
  • Connection strings used in multiple packages should be project-level connection strings.
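
To make the audit log and restartability bullets more concrete, here is a minimal sketch. The audt schema, table names, and columns are illustrative assumptions rather than part of any specific framework, and the audt.PackageList table (the set of packages the master package manages) is assumed to exist.

CREATE SCHEMA audt;
GO

-- One row per package execution
CREATE TABLE audt.AuditLog
(
    AuditLogID             INT IDENTITY(1,1) NOT NULL CONSTRAINT PK_AuditLog PRIMARY KEY CLUSTERED,
    PackageName            NVARCHAR(260) NOT NULL,
    ServerExecutionID      BIGINT        NOT NULL,  -- SSIS catalog execution ID for the entire run
    IncrementalWhereClause NVARCHAR(2000) NULL,     -- WHERE clause used to filter an incremental load
    ExtractRowCount        INT           NULL,
    InsertRowCount         INT           NULL,
    UpdateRowCount         INT           NULL,
    PackageStartTime       DATETIME2(3)  NOT NULL,
    PackageEndTime         DATETIME2(3)  NULL
);

-- Restartability: find packages that have not completed in the last 12 hours,
-- which a master package can use to decide what to run (or rerun)
SELECT p.PackageName
FROM   audt.PackageList p  -- assumed table listing the packages the master package executes
WHERE  NOT EXISTS
       (
           SELECT 1
           FROM   audt.AuditLog a
           WHERE  a.PackageName = p.PackageName
             AND  a.PackageEndTime >= DATEADD(HOUR, -12, SYSDATETIME())
       );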

Biml Specifics

  • Understand whether you need a flexible Biml Framework or just an accelerator for a current project. If you need flexibility, don’t hardcode connection strings and other things that change when you add/change sources and destinations. If you just need to accelerate development of a simple data mart, total flexibility may be overkill and actually cause more work.
  • Have a single place where you add synthetic metadata, as much as possible. BimlScript gets messy and difficult to understand when you have some extended properties that are read in, some annotations added directly, and some variables defined in your code. This is why I like synthetic metadata stored in a database. Also, extended properties don’t exist in Azure SQL Data Warehouse, so if you need your framework to work there you can’t go that route.
  • Don’t repeat your code in multiple files. If you have some logic that gets reused, move it to a separate file and reference it from other files.

What Do You Think?

What’s on your SSIS preferences list? Do you disagree with one of my preferences and want to share your knowledge? Let’s chat in the comments.

Data Warehousing, Microsoft Technologies, SQL Server, SSIS, T-SQL

Why We Don’t Truncate Dimensions and Facts During a Data Load

Every once in a while, I come across a data warehouse where the data load uses a full truncate and reload pattern to populate a fact or dimension. While it may not be the end of the world for a small table, it does concern me, and I usually recommend redesigning the load. My thoughts below on why this is an anti-pattern are true for using the actual TRUNCATE TABLE statement as well as executing a DELETE statement with no WHERE clause.

Surrogate Keys

Dimensional models use surrogate keys rather than natural keys. Surrogate keys are system-generated, meaningless values that are usually integers used to uniquely identify a record. They provide good performance for joins in queries, allow us to switch or use multiple source systems to feed the same tables, and facilitate the use of slowly changing dimensions. If you truncate a dimension table and then repopulate it, you will end up with different surrogate keys assigned to your dimension values. Let’s say we have a Geography dimension that looks like the table below.

GeographyKey  City    StateProvince   Country        PostalCode
1             Denver  Colorado        United States  80205
2             Dallas  Texas           United States  75201
3             Paris   Île-de-France   France         75001

If I truncate and reload this table, there is no guarantee that Denver will have a GeographyKey value of 1. I might reload the data and then have Paris as 1, Denver as 2, and Dallas as 3. And since we use surrogate keys as dimension lookups in fact tables and bridge tables (and snowflaked dimensions, if you use those), I would now need to update every table that references this dimension. That’s a lot of table updates without a good reason.

Error Handling

Another reason to avoid truncating the tables in your dimensional model is error handling. When you design your data load, you need to think about what should happen when it fails and where it is most likely to fail. Failures will often occur either in data transformation steps or upon inserting/updating values in the destination table due to data type/size conflicts.

Let’s say you truncate your fact table and then you encounter an ETL failure while performing the transformations to reload the table. Now you have an empty fact table. If you have error handling logic in your ETL, you may be able to redirect the error rows to another location to be handled later. But that still leaves you with an incomplete table at best. If instead of truncating and reloading, you were doing inserts and updates when you encountered an error, your table might reflect slightly stale data as of the previous load. You still might have a situation where your table is partially updated with the rows that were inserted before the error was encountered, depending on your ETL design. But having a fact table with old or partially updated data is usually (but not always) preferable to having no data when a data load process fails.

Performance and Data Availability

When you truncate and reload a table, you are assuming you have access to all of the source data needed to fully repopulate the table. This may be true today, but what about next year when you switch CRM systems? Or what if your organization makes the decision to archive data in the source system that is older than 5 years? If any data needed for your fact or dimension becomes unavailable, your truncate and reload pattern will fail to serve your needs.

Even if you are sure your data will be available, you may want to consider your data load times. If you only have a handful of tables with a few hundred thousand rows max, you may load your data warehouse in a few minutes. But what happens when you have a few hundred million rows with some complicated transformation logic in your pipelines? You might be adding minutes to hours to your load times.

Why Do ETL Developers End Up Truncating Fact Tables?

Sometimes developers just don’t know better. But often there were understandable reasons for using the truncate and reload pattern. While I have never built a system where I truncated a dimension, I have had a couple where we at least started out with truncating the fact table. We usually built it to make data initially available with correct values, and then worked with stakeholders to find a different way to access the data or to have them generate the data differently to alleviate the problems that caused us to want to truncate and reload.

Often the cause is data quality issues. On one project I worked on, we had dimension values that defined the granularity of the fact table that would change in the source systems, and the process to try to update them in the fact table was too complicated and took too long. So we made the decision to truncate and reload, understanding the risks of doing so. Having the data available and usable for analysis gave us more information on how useful the data was to users and helped us work to understand why and how our keys were changing. Over time, we were able to influence the way data was entered into source systems so that we didn’t have to go through the truncate and reload process and could perform incremental loads on our fact table.

Sometimes source systems allow hard deletion of data and don’t maintain a list of what was deleted. And depending on access methods and the size of the data, it can be difficult to compare the rows in a fact table with the source data to identify the deleted rows. I would argue this is a poor design of the source system, but we often can’t change that, and we still need to include the data in our data warehouse. So it’s understandable why someone in this situation would want to truncate and reload a fact table.

If your stakeholders are ok with the potential of long load times, empty tables when errors occur, and the assumption that source data will always be available, then there isn’t much problem with truncating and reloading a fact table. But that is often not the case. And that decision should be made explicitly rather than implicitly.

What To Do Instead of Truncating and Reloading a Fact Table

If you’ve been truncating and reloading a fact table because that just seemed like the simplest thing to do, you can change your load pattern.

My general load pattern is:

  1. Truncate the update table
  2. Insert new rows into the fact table and changed rows into the update table
  3. Perform set-based updates on the fact table based upon the data in the update table

I’m able to determine what rows are new, changed, and unchanged by using hash values. I concatenate and hash the values that define the level of uniqueness of the row into one value I call HistoricalHash and the values from the remaining columns into a value I call ChangeHash. In other words, I load a transactional or periodic snapshot fact table in a manner similar to a Type 1 slowly changing dimension.
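
A minimal T-SQL sketch of that pattern is below. The source, staging, update, and fact table names (src.SalesOrderDetail, stg.Sales, upd.FactSales, dw.FactSales) and their columns are assumptions for illustration; the staging table is assumed to store the two hash columns when it is loaded.

-- How the two hash values might be computed when loading the staging table
SELECT
    s.OrderNumber,
    s.OrderLineNumber,
    s.OrderQuantity,
    s.SalesAmount,
    HASHBYTES('SHA2_256', CONCAT(s.OrderNumber, '|', s.OrderLineNumber)) AS HistoricalHash, -- columns that define the grain
    HASHBYTES('SHA2_256', CONCAT(s.OrderQuantity, '|', s.SalesAmount))   AS ChangeHash      -- remaining columns
FROM src.SalesOrderDetail s; -- assumed source for the staging load

-- 1. Truncate the update table
TRUNCATE TABLE upd.FactSales;

-- 2. Insert new rows into the fact table...
INSERT INTO dw.FactSales (OrderNumber, OrderLineNumber, OrderQuantity, SalesAmount, HistoricalHash, ChangeHash)
SELECT s.OrderNumber, s.OrderLineNumber, s.OrderQuantity, s.SalesAmount, s.HistoricalHash, s.ChangeHash
FROM   stg.Sales s
WHERE  NOT EXISTS (SELECT 1 FROM dw.FactSales f WHERE f.HistoricalHash = s.HistoricalHash);

-- ...and changed rows into the update table
INSERT INTO upd.FactSales (OrderNumber, OrderLineNumber, OrderQuantity, SalesAmount, HistoricalHash, ChangeHash)
SELECT s.OrderNumber, s.OrderLineNumber, s.OrderQuantity, s.SalesAmount, s.HistoricalHash, s.ChangeHash
FROM   stg.Sales s
JOIN   dw.FactSales f ON f.HistoricalHash = s.HistoricalHash
WHERE  f.ChangeHash <> s.ChangeHash;

-- 3. Set-based update of the fact table from the update table
UPDATE f
SET    f.OrderQuantity = u.OrderQuantity,
       f.SalesAmount   = u.SalesAmount,
       f.ChangeHash    = u.ChangeHash
FROM   dw.FactSales f
JOIN   upd.FactSales u ON u.HistoricalHash = f.HistoricalHash;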

If you have data quality, data deletion, or other issues that prevent you from using a change detection pattern like the above, consider using a staging table and swapping it out with the fact table. Create a staging table that looks exactly like your fact table.

  1. Truncate the staging table
  2. Populate the staging table with all rows that should be in the fact table
  3. Swap the staging table and the fact table (this usually involves renaming the tables; see the sketch below)
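
Here is a minimal sketch of the swap using sp_rename, assuming the staging copy lives in the same schema as the fact table (sp_rename cannot move a table between schemas) and is named dw.FactSales_Staging. Partition switching is another option in some environments.

-- Swap the freshly loaded staging copy in for the fact table by renaming
BEGIN TRAN;

    EXEC sp_rename 'dw.FactSales', 'FactSales_Old';
    EXEC sp_rename 'dw.FactSales_Staging', 'FactSales';
    EXEC sp_rename 'dw.FactSales_Old', 'FactSales_Staging'; -- old table becomes the staging copy for the next load

COMMIT TRAN;

Whichever mechanism you use, make sure the indexes, constraints, and permissions on the staging copy match the fact table before the swap so the renamed table behaves exactly like the original.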

Understand the Consequences of Your Design

If you are choosing to truncate a dimension or fact table, be sure that you understand the trade-offs. You may be unwittingly opting for simplicity of ETL over data availability and efficient data loads.

Biml, Microsoft Technologies, SSIS

Biml for a Task Factory Dynamics CRM Source Component

I recently worked on a project where a client wanted to use Biml to create SSIS packages to stage data from Dynamics 365 CRM. My first attempt using a script component had an error, which I think is related to a bug in the Biml engine with how it currently generates script components, so I had to find a different way to accomplish my goal. (If you have run into this issue with Biml, please comment so I know it’s not just me! I have yet to get Varigence to confirm it.) This client owned the Pragmatic Works Task Factory, so we used the Dynamics CRM source to retrieve data.

I was ultimately creating a bunch of packages that looked like the below.

There are two pieces to using the Task Factory Dynamics CRM Source: the source component and the connection manager. The code for both is below.

Things to note for the source component:

  • This is pulling data for an entity in Dynamics 365. If you were to populate this by hand, the Task Factory UI would ask you which entity. There are several entities that are visible, but do not allow you to retrieve the data. You can find a list of internal entities here. You won’t want to stage data from those tables. I generated all of my packages at once with Biml, using a blacklist to exclude the internal entities.
  • The entity name that should be the value on line 21 is the entity logical name.
  • I’m iterating through columns in tables in a separate file and holding them in memory, hence the references to Table.columns. You’ll need to write some code to get the entity metadata to feed that, or find another way to provide the table names and column names and data types.
  • You must set the ErrorRowDisposition and TruncateRowDisposition to “NotUsed” as I did on line 10 for each column or your package will not work.
  • In OutputColumns and ExternalColumns collections, I just have if statements to check data types. There may be a more elegant way to do this, but this worked for me and I went with it. Feel free to leave suggestions in the comments if you have a better idea.
  • The Connection element on line 145 should keep the name “DYNAMICSCONNECTION”. If you change it, the connection won’t work. In my actual project, my connection manager is in a separate file and is named “TF_DynamicsCRM”. You can set the ConnectionName property to whatever you want, as long as it matches your connection definition (Line 153 in the gist).

Things to note for the Connection Manager:

  • I tried reverse engineering the connection manager in Biml Studio, and that got me close, but it didn’t get the ObjectData property quite right. I ended up having to create one manually, view the code for it in SSDT, and copy it into my Biml file. I have my packages and project set to EncryptSensitiveWithPassword, which is why you see the p4:Salt value on line 158. Your connection manager will have different values for lines 158, 159, and 160. If you set your project to EncryptSensitiveWithPassword, this value will stay consistent across developers. But if you use EncryptSensitiveWithUserKey, the value may change, which will be fun for regenerating with Biml. So make sure you plan for that if you have multiple developers or multiple computers on which you are working.
  • This connection manager is set to connect to Dynamics 365, hence the ServerHost=disco.crm.dynamics.com. If you have an on-prem or hosted environment that isn’t 365, you’ll have to find the correct ServerHost value and put it in line 165.

I hope that helps someone looking to generate the Task Factory Dynamics Source with Biml.

Excel, Microsoft Technologies, Power BI, SSIS

Installing the Microsoft.ACE.OLEDB.12.0 Provider for Both 64-bit and 32-bit Processing

I recently got a new laptop and had to go through the ritual of reinstalling all my programs and drivers. I sometimes work with SSIS locally to import data from Excel and occasionally do demos with Power BI where I read from an Access database, so I needed to install the ACE OLE DB provider. If you try to do either of those things without the ACE OLE DB provider, you will get a message like the one below.

The ‘Microsoft.ACE.OLEDB.12.0’ provider is not registered on the local machine.

I needed both the 32-bit and the 64-bit adapters to cover my various use cases. I started by installing the 64-bit Microsoft Access Database Engine 2010 Redistributable.  If you have ever attempted this, you know there is a secret. You cannot install the 2010 Access database engine with a different processing architecture (bittedness) from your Office install (or install both versions of the 2010 Access database engine) without a workaround.  If you try, you will get an error message.

ACE engine bittedness error

There are two options for getting the other version installed.

Option A: Install the 2007 Office System Driver: Data Connectivity Components

The 2007 Access database engine can read the same format, but it never had a 64-bit install. It doesn’t have the check to see what other products are on your machine and can co-exist peacefully with the 64-bit 2010 Access engine.

Option B: Install from the Command Line and Use the Passive Switch

  1. Open the command prompt.
  2. Type the file path and file name for your Access Database Engine install file.
  3. Follow it with a space and then “/passive” and then hit Enter.
  4. Delete or rename the mso.dll registry value in the following registry key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office\14.0\Common\FilesPaths

Note: A helpful commenter has informed me that for Access 2016 the switch is /quiet rather than /passive.

ACE install cmd

I wrote up a document for this a while back for a client, but I can never remember where I put it. So I’m posting this on my blog to make this info easier to find for me as well as you.

Azure Data Factory, Biml, Conferences, SSIS

I’m Speaking at IT/Dev Connections 2017

I’m pleased to say that I am speaking at IT/Dev Connections 2017. This year the conference will be held in San Francisco October 23-26. I had a great experience speaking at IT/Dev Connections in 2015, so I am excited to return again this year.

This conference is special to me because of its focus on providing great content for developers and IT pros – the conference website describes it as the “anti-keynote” conference with no forced marketing content.

I also enjoy it because it is more than just SQL Server/Data Platform (they have tracks for Cloud & Data Center, Enterprise Collaboration, Development & Dev Ops, and Enterprise Mobility and Security), and it’s nice to get out of my comfort zone a bit.

I will deliver two sessions at the conference.

Azure Data Factory in A Nutshell

If you have been wanting to get into Azure Data Factory (ADF) development, join me for this demo-filled overview. In this session, we’ll go over the basic anatomy of an ADF solution. You’ll learn what ADF is and isn’t as we walk through a solution to pull data from an on-premises SQL Server database to blob storage and then populate an Azure SQL Data Warehouse. You’ll learn tips for creating ADF solutions in Visual Studio, and I’ll show you how to make ADF development less tedious with a free Visual Studio add-in called Biml Express. You’ll leave with a basic understanding of ADF and a list of tools and skills you’ll want to acquire as you begin your ADF development.

Improve Data Warehouse ETL Delivery with a Patterns-Based Approach

What if I told you that 90% of your data integration development in SQL Server could be automated? In 5 years, you will be “old fashioned” if you are hand coding SSIS packages. Developers with different skill levels and design preferences create databases and SSIS packages however they see fit to get the job done. Documentation is frequently omitted. Maintenance and small enhancements consume too much development time while manual errors and inconsistencies slip through the testing and release process. You can use tools and frameworks to rearrange the development process and alleviate these common problems. The implementation and automation of design patterns leads to improved efficiency and communication. Join me in this session to learn how to use Business Intelligence Markup Language (Biml) and Excel to facilitate metadata-driven SSIS development. I’ll use database schema information plus Excel inputs to implement a small data mart from staging through the dimensional model.

I hope you will join me in San Francisco in October!

Microsoft Technologies, SSAS, SSIS

Process Compatibility Level 1200 SSAS Tabular Model from SSIS 2014

A client wanted to upgrade their SSAS model to SSAS 2016 to take advantage of some of the features of the new level 1200 compatibility model. But they weren’t yet ready to upgrade their SSIS server from SQL 2014. This presented a problem because they had been using the Analysis Services Processing Task to process their tabular model nightly. This processing task in SSIS 2014 uses the old Analysis Management Objects, which aren’t compatible with the new SSAS tabular models.

Attempting to use the AS Processing Task results in the following error: “[Analysis Services Execute DDL Task] Error: This command cannot be executed on database ‘MySSASDB’ because it has been defined with StorageEngineUsed set to TabularMetadata. For databases in this mode, you must use Tabular APIs to administer the database”

The reason for keeping SSAS processing in an SSIS package was to keep consistent logging throughout their data refresh process. So we set out to find another solution.

The new SSAS Tabular models use the Tabular Model Scripting Language (TMSL), which is JSON-based, rather than XMLA. A simple process full command in TMSL might look like this:

{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "MySSASDB"
      }
    ]
  }
}

Just pasting the JSON to process the model into an Analysis Services Execute DDL Task didn’t work. It returned the error “DDL is not valid”.

I asked around and received a couple of solutions that worked.

Option 1 (my less preferred option): Create an OLE DB connection manager (rather than MSOLAP) and use an Execute SQL Task.

You can set up an OLE DB connection that looks like the below. Create a new connection manager and choose the OLE DB connection manager type. Change the provider to Microsoft OLE DB Provider for Analysis Services 13.0 and fill in your connection information.

[Image: Microsoft OLE DB Provider for Analysis Services connection manager]

You’ll see the connection manager show up in the Connection Managers pane looking like this:

[Image: SSAS OLE DB connection manager in the Connection Managers pane]

Drag in an Execute SQL task, use the previously defined connection manager and paste in the JSON. This works just fine, but feels a bit too much like a workaround rather than a solid solution to me.

Option 2: Use an Analysis Services Execute DDL task and wrap the JSON in XMLA

Drag in an Analysis Services Execute DDL task. Create a new connection manager by choosing New Analysis Services Connection.

[Image: New Analysis Services Connection dialog]

Edit your connection information, click OK, and you will end up with a connection manager that looks like this:

[Image: Analysis Services connection manager]

Use that connection manager in the AS Execute DDL task. We can use the same JSON from earlier and wrap it in XMLA as shown below.

<Statement xmlns="urn:schemas-microsoft-com:xml-analysis">
{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "MySSASDB"
      }
    ]
  }
}

</Statement>

The XMLA/JSON command can be a direct source statement or placed in a variable and referenced from the task.

To test that the model is successfully processed, you can execute the SSIS task or package and then run the following query against the DMV.

SELECT [catalog_name], [date_modified], [compatibility_level]
FROM $SYSTEM.DBSCHEMA_CATALOGS
WHERE [catalog_name] = 'MySSASDB'

So if you are caught between versions in SSIS and SSAS, do not despair. You can still process your new SSAS Tabular model from an SSIS package.

Biml, Microsoft Technologies, SSIS

BimlScript – Get to Know Your Control Nuggets

This is post #2 of my BimlScript – Get to Know Your Code Nuggets series. To learn about text nuggets, see my first post.  

The next type of BimlScript code nugget I’d like to discuss is the control nugget. Control nuggets allow you to insert control logic to determine what Biml is generated and used to create your BI artifacts (packages, cubes, etc.).  Control nuggets can be used to:

  • Add conditional logic so that specified Biml fragments are only generated when certain criteria are met
  • Add loops to repeat Biml fragments multiple times
  • Define variables that will be used later
  • Access external data from databases, flat files or other sources that will be used in the generation of the final Biml code

Control nuggets start with <# and end with #>. Just like text nuggets, control nuggets can be a single line or multiple lines. And they can contain simple or complex logic.

There were actually control nuggets in the text nugget example from the previous post. The variable declarations at the top of the file (lines 4 – 10) are control nuggets.

Below is another example file that uses code nuggets. I like to separate my design patterns, project connections, and package generation into separate files. The BimlScript shown below is from a slightly altered “caller file”, the file that I would execute to create the SSIS packages that should follow a specified design pattern. This caller file is generating Type 1 slowly changing dimensions.

In my Biml framework, I store the data I need in order to generate my packages in SQL tables. You can see the control nugget in lines 14 – 40 retrieving data from my database and storing it in a data table for later use.

On line 46, there is a control nugget containing a for each loop. For each row in the data table, it calls a BimlScript file that creates a package that uses my Type 1 SCD design pattern with the variables from the data table. Notice that the end curly brace for my loop is in a separate control nugget.

There are many different ways to use control nuggets that aren’t covered here.  From these two examples you can start to see how I might combine text nuggets and control nuggets to automate my SSIS package creation and employ consistent design patterns. But there are still a few missing pieces that will be filled in when I cover the remaining code nugget types.