Create a Date Dimension in Azure SQL Data Warehouse

Most data warehouses and data marts require a date dimension or calendar table. Those of us who have been building data warehouses in SQL Server for a while have collected our favorite scripts to build out a date dimension. For a standard date dimension, I am a fan of Aaron Bertrand’s script posted on MSSQLTips.com. But the current version (as of Aug 8, 2016) of Azure SQL Data Warehouse doesn’t support computed columns, which are used in Aaron’s script.

I adapted Aaron’s script to work in Azure SQL Data Warehouse and am sharing it with you below, so you don’t have to do the same. I did leave out holidays because I didn’t need them in my calendar table. To add them back in, just add the column back to the table and use Aaron’s update statements. I also changed the MonthYear field to include a space between the month and year. Otherwise, my script should produce the same results as Aaron’s in an Azure SQL DW-friendly way. Notice that I chose to use Round Robin distribution for my date table. Also, tables are now created with clustered columnstore indexes by default in Azure SQL DW.

Instead of using computed columns, I inserted the dates and then updated the other columns in a temporary table. Then I used the Create Table As Select (CTAS) syntax to create my final RPT.Calendar table.
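A condensed sketch of that approach is below. The column list, date range, and temp table name are illustrative placeholders rather than the full script, and the table options assume the Azure SQL DW syntax available at the time of writing.

-- Temporary table holding the dates plus a couple of the attributes
-- that would have been computed columns in the original script.
CREATE TABLE #dim (
    [DateKey]   int         NOT NULL,
    [Date]      date        NOT NULL,
    [MonthYear] varchar(20) NULL
)
WITH (DISTRIBUTION = ROUND_ROBIN, HEAP);

-- Insert one row per date for the desired range (roughly ten years here).
INSERT INTO #dim ([DateKey], [Date])
SELECT
    CONVERT(int, CONVERT(varchar(8), d, 112)),
    CONVERT(date, d)
FROM (
    SELECT TOP (3653)
        DATEADD(DAY, ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1, '20150101') AS d
    FROM sys.all_objects a CROSS JOIN sys.all_objects b
) AS src;

-- Update the derived columns instead of relying on computed columns.
UPDATE #dim
SET [MonthYear] = DATENAME(MONTH, [Date]) + ' ' + DATENAME(YEAR, [Date]);

-- Create the final table with CTAS, distributed round robin with the
-- default clustered columnstore index.
CREATE TABLE RPT.Calendar
WITH (DISTRIBUTION = ROUND_ROBIN, CLUSTERED COLUMNSTORE INDEX)
AS
SELECT * FROM #dim;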

Happy time trending!

PolyBase Is A Picky Eater – Remove Carriage Returns Before Ingesting Text

I’ve spent the last few weeks working on a project that used PolyBase to load data from Azure Blob Storage into Azure SQL Data Warehouse. While it’s been a great experience, I must note that PolyBase is a picky eater.

PolyBase is a picky eater, just like this bulldog.

Creating the external tables is fairly simple if you are already familiar with T-SQL. The syntax for creating external data sources, file formats, and tables is well documented on MSDN. It’s important to be aware of the features and data types that aren’t supported, but beyond that I found it to be pretty smooth sailing.
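As a rough illustration of the pieces involved, here is a minimal sketch; the credential, data source, file format, schema, and table names (and the blob path) are placeholders, not the definitions from my project.

-- Assumes a database master key already exists in the database.
CREATE DATABASE SCOPED CREDENTIAL BlobStorageCredential
WITH IDENTITY = 'user', SECRET = '<storage account access key>';

-- External data source pointing at the blob storage container.
CREATE EXTERNAL DATA SOURCE AzureBlobStore
WITH (
    TYPE = HADOOP,
    LOCATION = 'wasbs://mycontainer@mystorageaccount.blob.core.windows.net',
    CREDENTIAL = BlobStorageCredential
);

-- Delimited text format: pipe field terminator, double-quote string delimiter.
CREATE EXTERNAL FILE FORMAT PipeDelimitedText
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = '|', STRING_DELIMITER = '"')
);

-- External table over the files in the /sampletable/ folder of the container.
CREATE EXTERNAL TABLE ext.SampleTable (
    Column1 int,
    Column2 varchar(10),
    Column3 varchar(8000)
)
WITH (
    LOCATION = '/sampletable/',
    DATA_SOURCE = AzureBlobStore,
    FILE_FORMAT = PipeDelimitedText
);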

But just because you have successfully created the external tables does not mean you are finished. That is when the “fun” begins. If you would like more information on why “fun” is in quotes, read Grant Fritchey’s blog post on Loading Data into Azure SQL Data Warehouse.

You should test after populating any table in SQL Server, but I think this is especially true with external tables. More than likely you will find that you must resolve several issues with source file contents and external table definitions.

First let me say that PolyBase is cool. I can query data in text files and join to tables in my database. Next let me say PolyBase is a fairly young technology and has some limitations that I imagine will be improved in later versions.

One of those limitations (as of July 30, 2016) is that while you can declare your field delimiter and a string delimiter in external file formats, the row delimiter is not user configurable and there is no way to escape or ignore the row delimiter characters (\r, \n, or \r\n) inside of a string. So if you have a string that contains the row delimiter, PolyBase will interpret it as the end of the row even if it is placed inside of the string delimiters.

To elaborate further, I had several fields that originally came from a SQL Server table and were of type text. Some of the values in these fields contained newlines (\n) as users had typed paragraphs and addresses into the fields in the source application. The data from the source tables was exported from SQL Server to Azure Blob Storage using Azure Data Factory with a simple copy pipeline with no modifications to the data. The problem is that Hive, PolyBase, and several other tools have issues reading strings with newlines in the value. They immediately interpret it as the end of the row. There is no escape character or setting you can use to allow newlines in the values.

If you find yourself in a similar situation, trying to load data from delimited files into Azure SQL DW and realizing you have newlines inside of string fields, there are two things you can do.

  1. Fix the data in the flat files so it doesn’t contain new lines in string fields.
  2. Switch to a different tool to load data to Azure DW. Azure Data Factory can take the data from blob storage and import it into a normal table in Azure DW.

In most circumstances, I would go for option #1. Option #2 only fixes things in Azure DW, leaving other tools in the environment to deal with the issue separately, and it requires storing a copy of the data in the DW.

In my project, I changed the ADF pipelines to replace newlines with an acceptable character/set of characters that doesn’t often appear in my data set and doesn’t obscure the values. We chose to replace them with 4 spaces. It’s important to understand that this means that your data in your blob storage will no longer exactly match its source. This is something you will want to document because it will surely pop up somewhere in the future.

Updating the ADF pipelines is not much effort. If my table definition is

CREATE TABLE TableA (
    Column1 int,
    Column2 varchar(10),
    Column3 text
)

and my original pipeline sqlReaderQuery was SELECT * FROM TableA, I just need to enumerate my fields, convert the text fields to varchar, and replace the new lines.

SELECT
    [Column1],
    [Column2],
    [Column3] = REPLACE(
                    REPLACE(
                        CAST([Column3] AS varchar(8000)),
                        CHAR(13), '    '),
                    CHAR(10), '    ')
FROM TableA

If you are using time slices in ADF and you have your query inside of the Text.Format() function, you will find that ADF doesn’t allow the single quotes around the four spaces in your JSON. You can use CHAR(32) instead of a literal space. If you have a better way of accomplishing this, please leave me a note in the comments.

SELECT
    [Column1],
    [Column2],
    [Column3] = REPLACE(
                    REPLACE(
                        CAST([Column3] AS varchar(8000)),
                        CHAR(13), CHAR(32) + CHAR(32) + CHAR(32) + CHAR(32)),
                    CHAR(10), CHAR(32) + CHAR(32) + CHAR(32) + CHAR(32))
FROM TableA

In addition to updating the ADF pipelines, I also had to replace the newlines in my existing files in blob storage. Since there weren’t many of them, I just opened them up in Notepad++ and did a find & replace. If there had been more files, I would have looked into a more automated solution.

If the ability to allow field/row terminators within string fields is something you would like to see in PolyBase, please voice your opinion by casting a vote on the feedback site.

Using Context To Traverse Hierarchies In DAX

My friend and coworker Melissa Coates (aka @sqlchick) messaged me the other day to see if I could help with a DAX formula. She had a Power BI dashboard in which she needed a very particular interaction to occur. She had slicers for geographic attributes such as Region and Territory, in addition to a chart that showed the percent of the regional total that each product type represented. The product type was in the fact/data table. Region and territory were in a dimension/lookup table and formed a hierarchy where a region was made up of one or more territories and each territory had only one region.

Territory Hierarchy

The requirement was that if a user selected no regions and one territory, the chart should show the % contribution of each product type for the region in which the territory is located.

PBI Undesired Behavior

Back in the MDX days this would be easy because we could go up the hierarchy (assuming the hierarchy is called Geography) using something like [Geography].[Geography].CurrentMember.Parent or Ancestors([Geography].[Geography].CurrentMember, 1).

I miss MDX

DAX doesn’t have this same capability, so you have to approach it a bit differently.

I made a quick demo model to mimic the circumstances.

  • The Geography table contains an ID as well as Region and Territory.
  • The Product table contains and ID and Product Name.
  • The Product Type is actually in the fact/data table along with the Sales Amount.
  • The relationships between the tables are many:1 from the Sales table to the lookup tables, both with cross filter direction set to single.

The ultimate calculation we wanted was <regional product type sales amount> / <region sales amount total for all product types>.

The initial calculation we started with was:

Percent Of Region Total :=
CALCULATE (
    SUM ( 'Sales'[Sales Amount] ),
    ALLSELECTED ( 'Geography'[Region] ),
    ALL ( 'Geography'[Territory] )
)
    / CALCULATE (
        SUM ( 'Sales'[Sales Amount] ),
        ALL ( 'Sales'[Product Type] ),
        ALLSELECTED ( 'Geography'[Region] ),
        ALL ( 'Geography'[Territory] )
    )

Notice that the numerator and denominator are the same, except the denominator overrides the query context for Product Type. This calculation worked fine when a single region was selected and no territories were selected. Once a territory is selected, it limits the scope to just that territory rather than the entire region, which is not desired.

In order to make sure I understood the query and filter context, I added two calculated measures:

TestFilters := IF ( HASONEFILTER ( 'Geography'[Region] ), "1", "Many" )
TestValues := IF ( HASONEVALUE ( 'Geography'[Region] ), "1", "Many" )

I put these measures in a pivot table along with my product types and my geography slicers, and I selected no regions and one territory. The TestFilters measure returned “Many”, but the TestValues measure returned “1”. So I had my first clue as to how to find my desired region.

Filters/Values Test

Next I worked on the numerator of my final calculation. I knew my measure would include the sum of Sales Amount, so I started with that and added CALCULATE() to it in order to change the context. I needed to remove the territory context, so I started with CALCULATE(SUM('Sales'[Sales Amount]), ALL('Geography'[Territory])).

Then I needed to group those results by region and filter to only the desired region, which I did using MAXX(VALUES(Geography[Region])). If I selected a single region or a single territory, I would only get one region back in the values. MAXX() was used to retrieve a single result rather than a table. My numerator ended up as:

Region Contr Numerator :=
MAXX (
    VALUES ( Geography[Region] ),
    CALCULATE ( SUM ( Sales[Sales Amount] ), ALL ( Geography[Territory] ) )
)

This way, the context of the region corresponding to the selected territory is saved outside of the CALCULATE() in the VALUES('Geography'[Region]) expression, but it is removed inside the CALCULATE() to provide the region total. This formula works when one region and no territories are selected as well as when one territory and no regions are selected.

The denominator is written similarly, with the addition of the ALL('Sales'[Product Type]) filter inside of the CALCULATE().

Region Contr Denominator :=
MAXX (
    VALUES ( Geography[Region] ),
    CALCULATE (
        SUM ( Sales[Sales Amount] ),
        ALL ( 'Sales'[Product Type] ),
        ALL ( Geography[Territory] )
    )
)

To finish up, I created my final calculation and added a bit of error handling for cases when no regions or territories are selected:

Region Avg :=
IF (
    HASONEVALUE ( 'Geography'[Region] ),
    DIVIDE ( [Region Contr Numerator], [Region Contr Denominator] ),
    BLANK ()
)

These calculations provided the desired result.

Region Contribution Final

Note that this only works with the cross filter direction set to single in the relationships. Also, I originally had a Product Type attribute in my Product table, used DAX to add it to my Sales table (to replicate the situation we were solving for), and then forgot and put the Product Type from the Product table in my visuals. That clearly didn’t work, because the calculations set their context using the Product Type field in the Sales table rather than the one in the Product table.

Hopefully this will come in handy if you have some interesting filter/slicer requirements in your reports.

End of the Power BI Updates List

I just finished updating my Power BI updates list for the last time.  I started the list in November as a way to keep track of all the changes. When I first started it, they weren’t posting the updates to the blog or the What’s New sections in the documentation. Now that Microsoft is on a steady cadence of monthly releases and doing a great job of communicating updates, there isn’t much need for this except as a fun data source. It’s also been about a year since Power BI V2 became generally available, so it seems like a good stopping point.

I removed the link to my updates page, but I left the page available for anyone who has it bookmarked and put a note on the page that the list is no longer being updated. I want to leave it in place because I appreciated the comments and the Power BI visuals people made using the list as a data source.

If you would like a copy of it to continue your own list or to use as a data source, you can access the file at http://1drv.ms/21Kjk3f.

BimlScript – Get to Know Your Control Nuggets

This is post #2 of my BimlScript – Get to Know Your Code Nuggets series. To learn about text nuggets, see my first post.  

The next type of BimlScript code nugget I’d like to discuss is the control nugget. Control nuggets allow you to insert control logic to determine what Biml is generated and used to create your BI artifacts (packages, cubes, etc.).  Control nuggets can be used to:

  • Add conditional logic so that specified Biml fragments are only generated when certain criteria are met
  • Add loops to repeat Biml fragments multiple times
  • Define variables that will be used later
  • Access external data from databases, flat files or other sources that will be used in the generation of the final Biml code

Control nuggets start with <# and end with #>. Just like text nuggets, control nuggets can be a single line or multiple lines. And they can contain simple or complex logic.
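As a small illustrative example (the variable and package names here are made up for this post), a control nugget can declare a variable and then loop over it to emit a Biml fragment multiple times:

<#
    // Control nugget: declare a variable for use later in the file.
    string[] schemas = new string[] { "dbo", "stg" };
#>
<Biml xmlns="http://schemas.varigence.com/biml.xsd">
    <Packages>
        <# foreach (string schema in schemas) { #>
        <Package Name="Load_<#=schema#>" ConstraintMode="Linear" />
        <# } #>
    </Packages>
</Biml>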

There were actually control nuggets in the text nugget example from the previous post. The variable declarations at the top of the file (lines 4 – 10) are control nuggets.

Below is another example file that uses code nuggets. I like to separate my design patterns, project connections, and package generation into separate files. The BimlScript shown below is from a slightly altered “caller file”, the file that I would execute to create the SSIS packages that should follow a specified design pattern. This caller file generates Type 1 slowly changing dimensions.

In my Biml framework, I store the data I need in order to generate my packages in SQL tables. You can see the control nugget in lines 14 – 40 retrieving data from my database and storing it in a data table for later use.

On line 46, there is a control nugget containing a foreach loop. For each row in the data table, it calls a BimlScript file that creates a package that uses my Type 1 SCD design pattern with the variables from the data table. Notice that the ending curly brace for my loop is in a separate control nugget.
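Since the embedded file may not display here, below is a stripped-down sketch of that caller pattern. The connection string, metadata table, column names, and the Type1SCD.biml file name are placeholders for illustration, not the exact contents of my framework; the called file would declare matching properties for the values passed to it.

<#
    // Control nugget: query the metadata database and store the results
    // in a data table for later use.
    System.Data.DataTable dt = new System.Data.DataTable();
    string conn = "Data Source=myserver;Initial Catalog=BimlMeta;Integrated Security=SSPI;";
    using (System.Data.SqlClient.SqlDataAdapter da =
        new System.Data.SqlClient.SqlDataAdapter(
            "SELECT PackageName, SourceTable, DestinationTable FROM dbo.PackageMetadata", conn))
    {
        da.Fill(dt);
    }
#>
<Biml xmlns="http://schemas.varigence.com/biml.xsd">
    <Packages>
        <# foreach (System.Data.DataRow row in dt.Rows) { #>
        <#=CallBimlScript("Type1SCD.biml", row["PackageName"].ToString(), row["SourceTable"].ToString(), row["DestinationTable"].ToString())#>
        <# } #>
    </Packages>
</Biml>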

There are many different ways to use control nuggets that aren’t covered here.  From these two examples you can start to see how I might combine text nuggets and control nuggets to automate my SSIS package creation and employ consistent design patterns. But there are still a few missing pieces that will be filled in when I cover the remaining code nugget types.

BimlScript – Get to Know Your Code Nuggets

In BimlScript, we embed nuggets of C# or VB code into our Biml (XML) in order to replace variables and automate the creation of our BI artifacts (databases, tables, SSIS packages, SSAS cubes, etc.). Code nuggets are a major ingredient in the magic sauce that is metadata-driven SSIS development using BimlScript.

There are 5 different types of code nuggets in BimlScript:

  • Text nuggets
  • Control nuggets
  • Class nuggets
  • Comments
  • Directives

Over the next several posts I’ll cover each type of code nugget and provide examples of their use.

Text Nuggets

Text nuggets evaluate the expression they contain and then replace the text nugget with the string representation of the value of the expression. I use them often to switch out names of packages, tasks, and components as well as source and destination tables in SSIS development when creating packages based upon a design pattern.

Text nuggets start with <#= and end with #>. Notice there is an equals sign at the beginning of the text nugget but not at the end.

Text nuggets are very useful. You can include complex business logic in the expressions. And the expression result can be any data type. The BimlScript compiler will automatically convert it to text before replacing the code nugget with the result. Like all code nuggets, text nuggets can be a single line or multiple lines.

Let’s take a look at some examples.

Note: The Biml script below has variables, connections, and a package all in one file for ease of demonstration. Normally I would split these out into separate files. 

This BimlScript creates an SSIS package that truncates and reloads data in a single table.
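Since the embedded file may not render here, the relevant parts of that Biml look roughly like the sketch below; the connection strings, variable values, and source query are placeholders for demonstration.

<#
    // Variables set at the top of the file (in practice these would come
    // from a caller file or a metadata table).
    string PackageName = "STG_Customer";
    string SourceConnection = "Source";
    string DestinationConnection = "Target";
    string DestinationSchema = "stg";
    string DestinationTable = "Customer";
#>
<Biml xmlns="http://schemas.varigence.com/biml.xsd">
    <Connections>
        <OleDbConnection Name="Source" ConnectionString="Provider=SQLNCLI11;Data Source=myserver;Initial Catalog=SourceDB;Integrated Security=SSPI;" />
        <OleDbConnection Name="Target" ConnectionString="Provider=SQLNCLI11;Data Source=myserver;Initial Catalog=StageDB;Integrated Security=SSPI;" />
    </Connections>
    <Packages>
        <Package Name="<#=PackageName#>" ConstraintMode="Linear">
            <Tasks>
                <ExecuteSQL Name="SQL Truncate <#=DestinationTable#>" ConnectionName="<#=DestinationConnection#>">
                    <DirectInput>Truncate Table [<#=DestinationSchema.ToUpper()#>].[<#=DestinationTable.ToUpper()#>];</DirectInput>
                </ExecuteSQL>
                <Dataflow Name="DFT Stage_<#=DestinationTable#>">
                    <Transformations>
                        <OleDbSource Name="OLE_SRC <#=DestinationTable#>" ConnectionName="<#=SourceConnection#>">
                            <DirectInput>SELECT * FROM dbo.[<#=DestinationTable#>];</DirectInput>
                        </OleDbSource>
                        <OleDbDestination Name="OLE_DST <#=DestinationTable#>" ConnectionName="<#=DestinationConnection#>">
                            <ExternalTableOutput Table="[<#=DestinationSchema#>].[<#=DestinationTable#>]" />
                        </OleDbDestination>
                    </Transformations>
                </Dataflow>
            </Tasks>
        </Package>
    </Packages>
</Biml>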

Notice that you can use the text nuggets alone, together, or with literal text.

The <#=PackageName#> text nugget simply replaces the value for the name of the package with the current value of the PackageName variable I set at the top of the file.

The DataFlow Task name is a combination of text and a code nugget.

<Dataflow Name="DFT Stage_<#=DestinationTable#>">

This comes in handy if you would like to enforce certain naming standards. My personal preference is to use naming conventions suggested by Jamie Thomson, so I build them into my Biml and then use text nuggets as shown above to change the rest of the name to reflect the nature of the current task.

The truncate table statement in the Execute SQL Task contains text and two code nuggets that reference a variable and use a function to convert the variable value into all uppercase text:

Truncate Table [<#=DestinationSchema.ToUpper()#>].[<#=DestinationTable.ToUpper()#>];

Hopefully you can see how text nuggets are useful when creating multiple packages at the same time. They allow you to switch out the values that change for each package and create consistent naming standards.

To get to know other types of code nuggets, see Get To Know Your Control Nuggets.

My PASS Summit Abstract Feedback

I wasn’t selected to speak at the PASS Summit this year. One of my talks was selected as an alternate, but I’m not holding out hope it will move up. While I am a bit disappointed, I’m ok with it.  I delivered a general session and a lightning talk at PASS Summit 2014, then took a year off and didn’t submit anything in 2015. But I was on the abstract review committee last year and this year.  It’s interesting to see how abstracts are ranked and feedback is provided. I have a few quick thoughts on not being chosen:

  1. Congrats to everyone who was chosen! I am happy for you, and I hope you enjoy the experience.
  2. My career and my confidence as a speaker are just fine. If my perception of my career/speaking success rested solely/mostly on being selected to speak at a single conference, I would probably need to re-evaluate my goals and priorities. I have learned a lot this year, and I’ve had some fun speaking opportunities, so I’m satisfied with my accomplishments in the first half of 2016.
  3. Although my attendance at PASS Summit 2016 is TBD, part of me is happy that if I do get to attend, I will do so without having to worry about speaking. It’s nice to enjoy the company of others and stay out late without stressing over last-minute tweaks to demos or being tired while speaking.

I think it’s helpful for me to see others’ abstracts that were submitted and the feedback they received, so I’m sharing mine. It can sometimes be difficult to tell whether you are missing something or whether it was just luck and circumstances (an unusually large number of good abstracts on your topic, etc.). While I can’t talk about the program committee or the criteria we used for judging abstracts, I will point out that if you look at the feedback you will see some common themes that may provide guidance for future abstract submissions.

I’ve listed the abstracts I submitted below along with the feedback they received as well as my thoughts.

Help! Someone Got Slowly Changing Dimensions in my Tabular Model! (Not Chosen)

Category: General Session
Track: BI Platform Architecture, Development & Administration
Topic: SSAS – Advanced Tips and Tricks
Level: 300
Session Abstract: Many developers shy away from slowly changing dimensions that capture history because they seem complicated, but they can be useful and worth the effort. We often need to see what an entity looked like at the time of an event rather than its current attributes; e.g., a customer’s marital status, address, and age at the time of purchase is useful for analyzing buying patterns. Having this historical view makes our analytics more accurate and useful. Many people can figure out how to add slowly changing dimensions to our SQL Server database. But what do we do when we get to our SSAS Tabular Model? It’s not quite as easy as trading the surrogate key for the natural key in your formulas. We’ll examine the mechanics of Type 2/6 slowly changing dimensions implemented in SQL Server. Then we’ll discuss good practices for adding slowly changing dimensions to the tabular model. Finally, we’ll review common DAX formulas and see how we should alter them to accommodate slowly changing dimensions.
Prerequisites: Basic familiarity with the idea of slowly changing dimensions, understanding of SSAS Tabular, and common DAX calculations.
Goals:

  • Define and explain the types of slowly changing dimensions
  • Provide tips on how to optimize loading of a type 2 or 6 slowly changing dimension
  • Demonstrate common calculations and how to change them to make them work with a type 2 or 6 slowly changing dimension
Comments
  • Sounds like it could be a good session but the abstract seemed all over the place. Hard to follow.
  • abstract seems to ramble. grammatical anomalies. punctuation misuse.
  • Fantastic topic, and extremely well-constructed abstract!
  • Well written abstract, sold me in easily. Seems to be on level. Clear goals.
  • Nice abstract with clear goals and outlines.
  • Abstract OK. Learning goals a bit roughly defined – could be more precise in this narrow topic.
  • Good abstract. Could be an interesting session to attendees.

My thoughts: There are some conflicting reviews, which is to be expected from a diverse group of reviewers. One of the reviewers seems to be extremely picky about grammar. I have reviewed the abstract, and I still do not see any grammatical issues. I am rather picky about grammar myself, but I think the goal of reviewing grammar in abstracts is to ensure that the abstract will be easily understood by attendees. As long as the abstract sounds professional and appropriately communicates what the session is about, tiny grammar issues and preferences don’t matter.

How to Care for Your Type 2 Slowly Changing Dimension (Not Chosen)

Category: Lightning Talk
Track: BI Platform Architecture, Development & Administration
Topic: SSIS – Building and Deploying Solutions
Level: 200
Session Abstract: Your type 2 slowly changing dimension (SCD2) needs love and proper care. Don’t over complicate your relationship with your SCD2, but also don’t let it feel unappreciated. Be straightforward and set ETL expectations appropriately. Be accepting if your type 2 slowly changing dimension decides he wants to be a type 6. This talk will help you understand your SCD2 and how to provide for him using SQL Server Data Tools and SSIS.
Prerequisites: High-level grasp of type 2 dimensions used to capture effective dated historical attributes. SSIS dev expertise to understand design patterns.
Goals:

  • Define a type 2 slowly changing dimension and how it works
  • Show example table DDL for a Type 2 slowly changing dimension
  • Show an example SSIS package with a good design pattern for loading a type 2 slowly changing dimension
Comments
  • can this really be covered in 10 minutes?
  • Misalignment in abstract, learning goals and level
  • Cute abstract. Title matches the abstract well; goals well defined and aligned. Would be a good lightning talk; level might be better suited for level 100
  • Really good abstract for a lightning talk. Sounds like a topic that a good amount of people and sounds like something I would enjoy attending.
  • Abstract is well written and easy to understand.
  • Good topic but not sure about whether the subject matter can effectively be presented in the time allotted.
  • Very focused topic – great for a lightning talk. Very good prerequisites and goals. The abstract is entertaining yet clear.
  • Well written abstract, on level
  • Well written and to the point.

My thoughts: I had fun writing this abstract, and I look forward to delivering this talk somewhere else. There seems to be some debate about whether slowly changing dimensions are a 100-level topic for SSIS. I think they are not. Whoever wrote that probably underestimates their own knowledge and has forgotten what SSIS basics at the 100 level are like. I think it would be a challenge to deliver in 10 minutes, and that is very valid feedback. I would have liked to try it though. Lightning talks are tough. They require a lot of editing and practice to pack useful, interesting, and coherent info into 10 minutes.

Overcoming Chartaphobia with Power BI (Alternate)

Category: General Session
Track: BI Information Delivery
Topic: Power BI
Level: 100
Session Abstract: Do reports in your organization consist mostly of giant tables of data? Perhaps you have gotten as far as adding KPI indicators or conditional highlighting to the tables. Maybe you have charts, but they are hideous and distracting. Although tables of data presented as reports are commonly found in many organizations, they may be doing you and your users a disservice. We’ll discuss why cognitive psychological studies tell us that graphs and pictures are more effective at communicating trends and comparisons and how to prepare to create good data visualizations. Then we’ll explore how to employ purposeful data viz designs to help users achieve their goal of making informed decisions, using a fun and useful Power BI dashboard. You’ll leave with guidance on how to take boring or unreadable tables of data and turn them into useful and visually appealing reports.
Prerequisites: Interest in data viz and/or Power BI
Goals:

  • Understand the right questions to ask when preparing to create a data viz solution
  • Explain 4 tips for effective data viz design
  • Demonstrate effective data viz design in a Power BI report and dashboard
Comments
  • Well written abstract and concise goals. I’m not sure if this will be of great interest
  • Don’t care for the data “viz” usage. This should be spelled out to visualization.
  • Whilst I get why the title might have chartaphobia in it – it still didn’t sound right. It’s not an actual word. The abstract is good, it does tell me that I may have charts but they’re “hideous and disgusting”. This didn’t sound nice and might turn some audience members away. The topic of Power BI is a good topic, the delivery just needs some work for a 100 level – explain earlier in the abstract how Power BI will make the charts better or exist at all.
  • Abstract: The level would be 200 Objective: I would like to attend this session.
  • I would’ve liked the goals to be a little more thorough.

My thoughts: I can’t imagine anyone being confused by the use of the word viz, but it can be a good rule of thumb not to include abbreviations or at least explain them. The comment about “chartaphobia” not being a real word made me laugh, but perhaps it doesn’t work well with an international audience. I would have put it in quotes, but I knew the Orator tool we use doesn’t display those characters correctly. We all use different tactics to create catchy titles. I didn’t coin that phrase, but I think we all understand it isn’t a “real” word. I give this talk at SQL Saturdays and user groups, and it goes over very well, so I’m not worried about the content of the session.

Building an Effective ETL Framework in SQL Server (Not Chosen)

Category: General Session
Track: BI Platform Architecture, Development & Administration
Topic: BIML
Level: 300
Session Abstract: An effective ETL framework helps you start and complete SSIS projects faster, employ reusable design patterns, and be consistent across packages and projects, lowering the learning curve for junior developers and your support team. Biml (Business Intelligence Markup Language) automates your SSIS design patterns and eliminates the manual repetition of solving familiar problems and making the same update over and over in similar packages. In this session, we’ll discuss and identify SSIS design patterns, learn how to create and automate them using BIML, and then see a demo of a package execution framework built using BIML.
Prerequisites: Basic proficiency in SSIS. Ability to read/understand XML and C# is helpful but not at all required.
Goals:

  • Provide a basic overview of BimlScript, how it works, and syntax for creating SSIS packages
  • Understand the usefulness of SSIS design patterns, identify common patterns, and practice abstracting details from a package to understand the pattern
  • Create a master package with an easily extensible execution framework, using BIML
Comments
  • Sounds like a good session.
  • Basic and 300 don’t coincide. Need to decide is it an advanced topic or not.
  • nice abstract
  • Good topic. More detail in the abstract about what will be covered would be helpful – include more of what was listed in the goals. Based on the abstract, the level seems too high, but it seems right based on the goals.
  • Seems like a lvl 200 session. Not the most interesting BIML abstract I’ve read.
  • Not sure I can identify the level 300 material in the abstract. Would recommend as level 200. Great story line and an important topic to teach; design patterns.
  • Well-developed outline and goals with details. The title may be clearer by adding BIML in it.
  • Good but average abstract. Could be an interesting session to some attendees.

My thoughts: This wasn’t my most inspired abstract writing. I agree it probably was more of a 200-level session than 300. I marked it as such because this session requires you to put all the pieces together and understand architecture more than just how to write Biml to make it do stuff.  In previous years there was no Biml track, so this year’s chosen abstracts will help me gauge how to write for this topic next year. I’m sure this session will continue to evolve as I deliver it more, and maybe I’ll come up with a better title and description.

Who needs SSAS when you have SQL? (Not Chosen)

Category: General Session
Track: BI Platform Architecture, Development & Administration
Topic: SSAS – Building and Deploying Solutions
Level: 200
Session Abstract: Analysis Services may seem foreign or unnecessary for SQL and .NET developers. However, it can offer many advantages for reporting and data exploration, even with the SQL engine’s latest indexing and in-memory analytics capabilities. In this session, we will cover useful features of SSAS and discuss conditions where it is beneficial. Next we’ll compare the two types of Analysis Services databases (Multidimensional and Tabular) and identify requirements that should influence your decision of which type is right for your solution. Then we will explore common ways to retrieve data and browse your SSAS database, such as Power BI and Excel, reporting tools like SSRS, and custom .NET applications.
Prerequisites: Basic familiarity with data warehousing, interest in learning more about SSAS
Goals:

  • Identify the conditions in which SSAS is preferable to using just the SQL Engine
  • Explain the differences between SSAS multidimensional and tabular
  • Demonstrate various ways of browsing and querying an SSAS model
Comments
  • Title a bit misleading. Was expecting a SSAS bashing, but pleasantly surprised with the meat of the abstract. Learning goals in line with level and abstract.
  • concise engaging abstract
  • Demo percentage seems low for such a topic but, overall, looks like a good session.
  • Good abstract. Sounds like a good session.
  • The idea presented in the introduction would be a good session, but it seems like the abstract strays from that path and into a general discussion of how to use SSAS. The abstract is well-constructed otherwise, and it effectively conveys the goals stated. The title seems to present the opposite viewpoint of the introduction to the abstract.
  • Well outlined abstract but if it had more demo, it would sound more interesting and persuasive.
  • Interesting and descriptive abstract. Seems to match level and topic.

My thoughts: This was some of the most helpful feedback I received. I don’t remember exactly what percent I said was demo, but the session could probably be redesigned to include more demos. I have also given this session before and received feedback that attendees felt the title was misleading. I will consider renaming the session in the future.