Consulting

So You Want to Be A Business Intelligence Consultant

I’ve had a few people ask me recently for advice on getting into business intelligence/analytics consulting. I have been in consulting for 6+ years and have worked for 3 different consulting firms, so I’m sharing my thoughts and experiences here in case they are helpful to someone else. My first response to people asking this question is usually to clarify what they mean by consulting.

What do you envision when you think about consulting?

Do you want to be an independent consultant, work through a placement agency, or work as a full-time employee (FTE) for a consulting firm?

Life looks a little different based upon your answer to this question. I have chosen to work for a consulting firm for a few reasons:

  • I like the stability of a salary and paid time off (vacation, holidays, sick days)
  • I don’t have to do my own accounting/taxes/collections
  • I’m not solely dependent on my sales efforts to bring in work
  • I like having smart coworkers that are in the trenches with me

If you are independent, you are responsible for everything. You must take care of (do yourself or outsource) your own accounting, legal, taxes, collections, sales, etc. Even if you outsource some of these functions, you need to stay somewhat involved – you can’t just throw your financial documents over the fence and wash your hands of them.

And if you aren’t doing billable work and getting the client to pay you, you aren’t getting paid. This means you need some financial reserves to cover slow months.

My independent consultant friends tell me that sales takes up much more time than they had initially anticipated. They are always working to line up that next engagement to avoid having large gaps between projects. It takes work to build up a sales pipeline. Sometimes you can mitigate this by teaming up with other independent consultants or companies to have a lead referral agreement. They can refer clients to you when they need help in your area of expertise. But there is usually no guarantee of work from these other parties, and sometimes they take a cut of the revenue for sending the work to you. You can outsource a lot of things, but sales is rather difficult because you need the salesperson to understand what you do, the value it provides to clients, and what your values and preferences are.

Another issue you face as an independent consultant (and sometimes as a consultant for a very small consulting company) is how to get interesting projects that require multiple people. Let’s say you have found a potential client that wants to do some really cool analytics, but the project requires three people to work together. You’ll need to have relationships with other consultants with whom you can team up to work on the project. And even if you have good people you can work with in general, the work needs to fit into their schedules.

As an FTE, I am still responsible for some sales efforts, but we also have a salesperson and other employees to help. And if I don’t sell anything, I still get paid. This is a great thing for my mortgage and my stress levels.

But as an independent consultant, you also are your own boss. You get to make all of the decisions about how and where to spend your time, what office and tech supplies to purchase, what your website looks like, how much vacation you can take, and who you want to do work for. Want to take a month off to travel the world or spend time with your family? You don’t need anyone else’s permission to do that. But you won’t be getting paid if you aren’t working. One way to mitigate this is to create content that can be sold so you have some passive income in addition to your normal consulting engagements. An example of this is creating a business intelligence course and selling it on your website or through an online learning/education company. You could continue to get revenue/royalties as people purchase the course. But do your homework to get realistic expectations about how much extra income this would mean relative to the time it takes to create and update the content.

If you want to be a contractor where you do full-time, often 3- to 12-month, engagements with a company, you can get your own work or you can go through placement firms. Some placement firms offer benefits and some guarantee of work in exchange for a share of the income resulting from your placement. Others take a flat fee (either from you or from the hiring company). This type of consulting is usually paid based upon an hourly rate.

Do you want to be a consultant who is more project-based where you have a specified scope and budget, and you may work for multiple clients at one time? Or do you want to be embedded full-time in an organization?

All consultants/contractors must complete (most of) the tasks and projects they take on, but managing scope and budget is a skill in its own right. Some people enjoy that challenge while others do their best to avoid it. If you are essentially a contractor who spends your time with a single client and does whatever comes up, you may spend less time and effort managing scope and budget. If your work is more project-based, this becomes an important part of your work.

If you do project-based consulting, you may occasionally have to work with multiple clients at once. This means you have to do a lot of context switching. I have had days where I worked on a Power BI sales report makeover for Client A, helped a coworker with some DAX over lunch, and built a data warehouse with product inventory data in Azure SQL DW in the afternoon for Client B. Clients don’t really think about what we do when we aren’t working for them. They are expecting us to mentally pick right back up where we left off when we get back to them and remember all the project and organizational details. I rather enjoy that variety, but I know that it gets stressful and isn’t a good fit for everyone.

Another aspect of project-based work is the amount of time you get to stay on with a client. Some consultants focus on implementations or health checks and then move on. They aren’t there 6 months later to see how things are going with that new system they implemented, unless the client calls them back. There is good and bad in this situation. The consultant may get a lot of experience with building systems/performing health checks in a short amount of time compared to an FTE not in a consulting role. But they may not be around to understand and experience the consequences of their design unless the client calls them back. You can always work out retainer or support agreements to hedge this a bit, but it is up to your organization and the client to do so.

Project-based work can also mean you have less stability in your life. It’s difficult to anticipate exactly when an engagement will start or end, when you will need to travel, or which nights you may need to work late. But project-based work can also offer exciting opportunities, a great variety of technical work, and the chance to learn about different companies and industries within a short amount of time.

What would work best for you?

There isn’t a single right answer to these questions. It’s about what would give you the income, benefits, lifestyle, and type of work that you want.

Azure, Azure SQL DW, Data Warehousing, Microsoft Technologies, T-SQL

What You Need to Know About Data Classifications in Azure SQL Data Warehouse

Data classifications in Azure SQL DW entered public preview in March 2019. They allow you to label columns in your data warehouse with their information type and sensitivity level. There are built-in classifications, but you can also add custom classifications. This could be an important feature for auditing your storage and use of sensitive data as well as compliance with data regulations such as GDPR. You can export a report of all labeled columns, and you can see who is querying sensitive columns in your audit logs. The Azure Portal will even recommend classifications based upon your column names and data types. You can add the recommended classifications with a simple click of a button.

You can add data classifications in the Azure Portal or via T-SQL or PowerShell. Data classifications are database objects.

ADD SENSITIVITY CLASSIFICATION TO
    dbo.DimCustomer.Phone
    WITH (LABEL='Confidential', INFORMATION_TYPE='Contact Info')

To view existing data classifications, you can query the sys.sensitivity_classifications view or look in the Azure Portal.

SELECT
    sys.all_objects.name AS [TableName],
    sys.all_columns.name AS [ColumnName],
    [Label],
    [Information_Type]
FROM sys.sensitivity_classifications
LEFT JOIN sys.all_objects ON sys.sensitivity_classifications.major_id = sys.all_objects.object_id
LEFT JOIN sys.all_columns ON sys.sensitivity_classifications.major_id = sys.all_columns.object_id
    AND sys.sensitivity_classifications.minor_id = sys.all_columns.column_id;

Be Careful When Loading With CTAS and Rename

One issue that is specific to using data classifications in Azure SQL DW is that it is possible to inadvertently drop your classifications when you are loading your tables using the recommended T-SQL load pattern. Typically, when using T-SQL to load a dimensional model in Azure SQL DW, we perform the following steps:

  1. Create an upsert table via CTAS by unioning new data from a staging table with existing data from the dimension table
  2. Rename the dimension table to something like Dimension_OLD
  3. Rename the upsert table to Dimension
  4. Drop the Dimension_OLD table
Animation of a table load process in Azure SQL DW


In the animation above, you’ll first see the load process as described, and then it will replay with sensitivity labels added to the dimension table. You’ll see that they are dropped when we drop the old dimension table. This makes sense because sensitivity classifications are objects related to that table. We would expect an index to be dropped when we drop the related table. This works the same way.
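To make the pattern concrete, here is a minimal T-SQL sketch of those four steps for a hypothetical DimCustomer dimension loaded from a stg.DimCustomer staging table. The table names, the CustomerKey column, and the distribution options are placeholders for illustration only.

-- Step 1: Build the upsert table via CTAS, combining new/changed rows from staging
-- with the existing dimension rows that are not being replaced.
-- (dbo.DimCustomer, stg.DimCustomer, and CustomerKey are hypothetical names.)
CREATE TABLE dbo.DimCustomer_Upsert
WITH (DISTRIBUTION = REPLICATE, CLUSTERED COLUMNSTORE INDEX)
AS
SELECT s.*
FROM stg.DimCustomer s
UNION ALL
SELECT d.*
FROM dbo.DimCustomer d
WHERE NOT EXISTS (SELECT 1 FROM stg.DimCustomer s2 WHERE s2.CustomerKey = d.CustomerKey);

-- Step 2: Rename the current dimension table out of the way
RENAME OBJECT dbo.DimCustomer TO DimCustomer_OLD;

-- Step 3: Rename the upsert table to take its place
RENAME OBJECT dbo.DimCustomer_Upsert TO DimCustomer;

-- Step 4: Drop the old table; any sensitivity classifications defined on it are dropped with it
DROP TABLE dbo.DimCustomer_OLD;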

Check out my SQL notebook for a demonstration of the issue as well as my workaround that I describe below. If you spin up an Azure SQL Data Warehouse with the sample database, you can run this notebook from Azure Data Studio and see the results for yourself.

There are a few complicating factors:

  • There are currently no visual indicators of sensitivity classifications in SSMS or Azure Data Studio.
  • ETL developers may not have access to the data warehouse in the Azure Portal to see the sensitivity classifications there.
  • The entire process of adding and managing sensitivity classifications may be invisible to an ETL developer. A data modeler or business analyst might be the person adding and managing the sensitivity classifications. If the ETL developer isn’t aware classifications have been added, they won’t know to go and look for them in the sys.sensitivity_classifications view.
  • SSDT does not yet support sensitivity classifications. The only way I have found to add them into the database project is as a post-deployment script with the build property set to none.

The good news is that you can add the sensitivity classifications back to your dimension table using T-SQL. The bad news is still that the ETL developer must remember to do it. My workaround for now is a stored procedure that will do the rename and drop of the tables plus copy the sensitivity classifications over. My hope is that it’s easier to remember to use it since it will do the rename and drop for you as well.
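As a rough illustration of the idea (not the actual SwapWithMetadata code), the classification-copying step might look something like the sketch below. It assumes the renames have already happened, so the classifications are still attached to a hypothetical dbo.DimCustomer_OLD table, and it doesn’t handle edge cases such as quotes in label or information type names.

-- Sketch only: copy sensitivity classifications from the old table to its replacement
-- before dropping it. Table names are hypothetical.
CREATE TABLE #classifications
WITH (DISTRIBUTION = ROUND_ROBIN, HEAP)
AS
SELECT ROW_NUMBER() OVER (ORDER BY c.name) AS rn,
       c.name AS ColumnName,
       sc.[label] AS [Label],
       sc.[information_type] AS [Information_Type]
FROM sys.sensitivity_classifications sc
JOIN sys.all_columns c
    ON sc.major_id = c.object_id
    AND sc.minor_id = c.column_id
WHERE sc.major_id = OBJECT_ID('dbo.DimCustomer_OLD');

DECLARE @i int = 1;
DECLARE @max int;
DECLARE @sql nvarchar(4000);
SELECT @max = MAX(rn) FROM #classifications;

WHILE @i <= ISNULL(@max, 0)
BEGIN
    -- Rebuild and run the ADD SENSITIVITY CLASSIFICATION statement against the new table
    SELECT @sql = 'ADD SENSITIVITY CLASSIFICATION TO dbo.DimCustomer.' + ColumnName
                + ' WITH (LABEL = ''' + [Label] + ''', INFORMATION_TYPE = ''' + [Information_Type] + ''');'
    FROM #classifications
    WHERE rn = @i;

    EXEC (@sql);
    SET @i = @i + 1;
END;

DROP TABLE dbo.DimCustomer_OLD;

Because the statements are generated from the catalog view, nothing has to be hard-coded per table, which is what makes this practical to bake into a reusable procedure.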

Update: Someone asked about the name SwapWithMetadata and why it doesn’t specifically mention sensitivity classifications. I didn’t mention classifications because there are other things that need this same treatment. Dynamic data masking will also need to be reapplied. With dynamic data masking, it is even more important to add the mask back immediately after swapping the tables rather than waiting for the full data load of all tables to finish before adding everything back. If your load takes a long time or the process fails on another table, you don’t want your data exposed without a mask to users who shouldn’t see the full information.
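For example, if the Phone column from the earlier example had a mask before the swap, re-adding it to the swapped-in table would look something like this sketch (use whichever mask function the column actually had):

-- Reapply the dynamic data mask to the swapped-in table
-- (sketch only; 'default()' stands in for whatever mask function the column had before the swap)
ALTER TABLE dbo.DimCustomer
ALTER COLUMN Phone ADD MASKED WITH (FUNCTION = 'default()');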

Eventually, the tools will be updated to provide more visibility to data sensitivity classifications, but we still need to make sure they don’t get dropped.

For now, my recommendation is that if you are going to go in and add a lot of sensitivity classifications, you create a user-defined restore point immediately afterward so that you know you have them in a backup somewhere. Azure SQL DW doesn’t do point-in-time restores the way Azure SQL DB does. It takes automatic restore points every 8 hours or so. So if someone went through the trouble of adding the sensitivity classifications and they were dropped through the data load process, there is no guarantee that you could use a backup to get them back.

Vote for My Enhancement Idea

If you would like Microsoft to add something to the product to keep sensitivity classifications from being dropped, or at least make it easier to add them back, please vote for my idea.

Not an Issue with Other Data Load Methods

Please note that if you are using other tools or methods to load your tables where you don’t swap them out, you won’t have the issue of dropping your sensitivity classifications. But I wanted to bring up this issue because I can see people spending a lot of time adding them and then suddenly losing them, and I want everyone to avoid that frustration.

Give Data Classifications a Try

I think data classifications are a good addition to SQL DW. Anything that helps us efficiently catalog and manage our sensitive data is good. I have added them in my demo environment and hope to use them in a client environment soon.

Have you tried out data classifications in SQL DW or DB? What do you think so far? If not, what is keeping you from using them?

Data Visualization, Microsoft Technologies, Power BI

Turning a Corporate Color Palette into a Data Visualization Color Palette

Last week, I had a conversation on twitter about dealing with corporate color palettes that don’t work well for data visualization. Usually, this happens because corporate palettes are designed with websites and/or marketing collateral in mind rather than information graphic design. This often results in colors being too bright, dark, or dull to be used together in a report. Sometimes the colors aren’t easily distinguishable from each other. Other times, the colors needed for various situations (main color, ancillary colors, highlight color, error color, KPIs, text, borders) aren’t available in the corporate palette.

You can still stay on brand and create a consistent user experience with a color palette optimized for data visualization. But you may not be using the exact hex values as defined in the corporate palette. I like to say the data viz color palette is “inspired by” the marketing color palette.

I asked on twitter if anyone had a corporate color palette they needed to convert into a data visualization palette, and someone volunteered theirs. So this post is my walk-through of how I went about creating the palette.

Step 1: Identify a Main Color

There is often a main color in the corporate color palette. If that color is a medium intensity color, I usually include that color in my color palette as is. If it is excessively dark, light, or gray, I’ll either tweak the color a bit or use the second color in the color palette.

Step 2: Choose a Color Scheme

Next, I need to decide what kind of relationship the other colors will have with the main color. In other words, I have to decide what type of color scheme I want to use. I tend to go for monochromatic or analogous color schemes. Complementary color schemes can be difficult, depending on your main color. I generally try to stay away from using reds and greens together in the same palette because it’s hard to stay colorblind-friendly and because the primary colors together can make it feel like a Christmas or kindergarten theme. I often try to reserve reds and oranges to draw attention to specific data points, but that isn’t a hard and fast rule.

I need 2 – 4 ancillary colors to go with my main color. I rarely need to use all 4 colors together in one chart, but there are some cases such as line charts with 4 series where that will be necessary. People can preattentively distinguish up to about 7 colors at once, so I need to use fewer than 7 colors in a single chart. If I encounter a situation where I feel like I need more than 4 colors together, I re-evaluate my choice of chart type and my use of color within the chart.

Also, I want the colors to be roughly the same level of brightness and intensity. Most importantly, the colors need to be easily distinguishable from each other.

RGB Color Wheel

Step 3: Choose Highlight and Error Colors

We often need to draw attention to specific data points, usually because a value is outside of the expected range. KPIs are common in Power BI reports, so I need to make sure I have a color to indicate “bad” statuses. I also like to have a highlight color that doesn’t necessarily mean “bad”, just “look here”. These highlight and error colors need to be noticeably different from my other colors so that they draw attention to the data points where they are used.

Step 4: Add Border and Background Colors

I like to add grays and browns to go with my color scheme. I’ll use them mostly for borders, grid lines, text, and light background shades. But also, I want to make sure I have 8 colors in my palette. If I have fewer than 8 colors, Power BI will add colors from the default palette at the end of my colors to fill out the full 8 columns.

Color Palette Creation Example

The original corporate color palette that I was given had a lot of colors.

Primary Corporate Colors
Secondary Corporate Colors

The primary colors go all the way around the color wheel. I definitely don’t want to use them all together. The secondary colors have the beginnings of a monochromatic blue palette, an analogous blue/green palette, or an analogous orange/red/purple palette.

I don’t need all of these different hues. I need 8 medium-intensity colors. Power BI will add black and white and provide the shades and tints for me.

Main Color

I’m keeping the main color as it is. It is bright and saturated enough to not be dull/boring and also not so bold as to leave no room for bolder colors to be used to highlight specific data points.

Chart using the main color of my palette

Color Scheme

I choose an analogous color scheme, which means I pick colors that are next to my main color on the color wheel. Since blue is my main color, I stick with cool colors for the ancillary colors.

Main color plus 3 ancillary colors

I want my 4 colors to be easily distinguishable from each other, and I want them to be roughly the same intensity and brightness.

Highlight and Error Colors

I’m adding yellow and red to my palette. The yellow can be a generic highlight color as well as a “caution” color. The red can be my “bad” color. I’m checking that my colors are easily distinguishable for various types of color vision deficiency.

I confirm that my highlight and error colors are easily distinguishable from the other colors for the most common types of color vision deficiency. I can also see here that my second and fourth colors look a bit similar on the deuteranopia line, so I’ll have to be careful how I use them together, perhaps switching to a shade or tint of the fourth color if needed.

Border and Background Colors

Now I add my grays and browns to use for formatting. This completes my color palette.

Full color palette for Power BI Theme

Power BI Theme

I can take the hex values for my colors and drop them in the color theme generator on PowerBI.Tips to get my JSON theme file.

{
    "name": "Analogous",
    "dataColors": ["#00aaff", "#00c2b2", "#213dca", "#7514ff", "#ffd500", "#ff002b", "#768389", "#987665"]
}

When I import my theme file into my Power BI report, I get the additional tints and shades from the colors I provided.

Next I try out my new color theme in a report to see if I need to tweak any colors. This is the true test. The colors may look great in little boxes, but they might need to be altered to work on a full report page. The shade of purple that I used originally (not shown in this blog post) was a bit too intense compared to the other colors, so I replaced it with a slightly muted tint that better matched the other colors. That is the type of thing you will notice when applying your theme to a report. Don’t get too stuck on finding the exact perfect colors. Colors look slightly different on different screens. Just make sure nothing is inadvertently distracting.

Helpful Color Tools

I’m currently using https://color.mediaandme.be to create my color palettes. It’s free, and it allows me to add many (> 6) colors to my palette. Other benefits:

  • It shows me what all the colors look like together
  • It provides a colorblindness simulator
  • It lets me easily tweak hue, saturation, and brightness
  • It generates a link for the color palette I create so I can easily share it with others for feedback

When I need ideas for how to tweak a color, I use https://www.colorhexa.com. I picked the gray color in my palette by getting the grayest tone of my main color from ColorHexa.

Further Reading on Color in Data Visualization

Tiger Color: Basic Color Schemes – Introduction to Color Theory
Maureen Stone: Expert Color Choices
Billie Gray: Another post about colours for data visualisation. Part 3 — DIY Palettes
Christopher Healy: Perception in Visualization

Data Visualization, Microsoft Technologies, Power BI

Storytelling Without Data?

There are many great resources out there for data visualization. Some of my favorite data viz people are Storytelling With Data (b|t), Alberto Cairo (b|t), and Andy Kirk (b|t). I often reference their work when I present on data visualization in the context of the Microsoft Data Platform. Their work has helped me choose the right chart types for my data and format it so it communicates the right message and looks good. But one topic I have noticed that most data visualization experts rarely address is the question I get in almost every presentation I give:

How do I tell a story with data when my data is always changing?

If you are a BI/report developer, you know this challenge well. You may follow all the guidelines: choose a good color palette, make visuals that highlight the important data points, get rid of clutter. But what happens when your data refreshes tomorrow or next month or next year? It’s much easier to make a chart with static data. You can format it so it communicates exactly the right message. But out here in Automated Reporting Land, that is not the end of our duties. We have to make some effort to accommodate future data values. Refreshing data creates issues such as:

  • We can’t put a lot of static explanatory text on the page to help our audience understand the trends because the trends will change as the data is refreshed. Example: “Sales are up year over year, and the East region is the top contributor to current quarter profit” is true today, but may not be true next month.
  • My chart may look good today, but new values may come in that change the scale and make it difficult to see small numbers compared to a very large number. Example: A bar chart showing inventory levels by product looks reasonable today because all products have a stock level between 1 and 50. But next month, a popular new product comes in, and you have 500 of them, which changes the scale of your bar chart and makes every other product’s inventory super small and not easy to compare.
  • I can’t statically highlight outliers because my outliers will change over time. Example: I have a chart that shows manufacturing defects by line, and I want to highlight that the dog treats line has too many defects. I can’t just select that data point on the chart and change the color because next month the dog treats line might be doing fine.

How do we form a message with known metrics and data structure but without specific data values?

When people have asked me about this in the past, I gave an answer similar to the following:

Instead of a message about specific data values, I consider my audience and the metrics they care about and come up with the top 2 or 3 questions they would want to answer from my report. Then I build charts that address those questions and put them in an order that matches the way my intended audience would analyze that information. This might include ordering the visuals appropriately on a summary page as well as creating drillthrough paths to more information based upon items and filters selected to help my user understand the reasons for their current values.

While this isn’t horrible advice, I felt I needed a better answer on this issue. So I sought advice from Andy Kirk, and he responded brilliantly!


To the issue of situations where data is periodically refreshed, I see most encounters (ie. the relationship between reader and content) characterised less by storytelling (the act of the creator) and more by storyforming (the act of the reader). 

Andy Kirk

Storyforming

Andy went on to explain, “What I mean by this is that usually the meaning of the data is unique to each reader and their own knowledge, their own needs, their own decision-contexts. So rather than the creator ‘saying something’ about such frequently changing data in the form of messages or headlines, often it might be more critical to provide visual context in the form of signals (like colours or markers/bandings) that indicate to the reader that what they are looking at is significantly large/small/above average/below average/off-target/on-target/etc. but leave it up to THEM to arrive at their own story.”

Then he gave an example: “I find this context a lot working with a football club here in the UK. Their data is changing every 3-7 days as new matches are played. So the notion of a story is absent from the visuals that I’m creating for their players/management/coaches. They know the subject (indeed better than me, it’s their job!), they don’t need me to create any display that ’spells out’ for them the story/meaning, rather they need – like the classic notion of a dashboard – clear signals about what the data shows in the sense of normal/exceptional/improving/worsening.”

Andy also agreed that a key aspect of storyforming is that interactive controls (slicers, filters, cross-filtering capabilities) in your report consumption tools give the reader the means to overcome the visual chaos that different data shapes may cause through natural variation over time.

Less Eye Rolling, More Storyforming

If you are a BI developer and have rolled your eyes or sighed in frustration when someone mentioned storytelling in data visualization, try thinking about it as storyforming. Make sure you have the right characters (categories and metrics) and major plot points (indicators of size, trend, or variance from target). You are still responsible for choosing appropriate chart types and colors to show the trends and comparisons, but don’t be so focused on the exact data points.

Many reporting tools (including Power BI) allow you to perform relative calculations (comparison to a previous period, variance from goal, variance from average) to dynamically create helpful context and identify trends and outliers. In Power BI, there are custom visuals that allow you to add dynamically generated natural language explanations if you feel you need more explanatory text (ex 1, ex 2, ex 3). And Power BI will soon be getting expression-based formatting for title text in visuals, which can also help with providing insights in the midst of changing values.

But mostly, try to design your report so that users can slice and filter to get to what matters to them. Then let your users fill in the details and meaning for themselves.

Microsoft Technologies, Power BI

Power BI: Where Should My Data Live? Webcast

When you start a Power BI project, you need to decide how and where you should store the data in your dataset. There are three “traditional” options:

  • Imported Model: Data is imported, compressed, and stored in the PBIX file, which is then published to the Power BI Service (or Report Server if you are on-prem)
  • Live Connection: Data is stored in Analysis Services, and your Power BI dataset is really a reference to the Analysis Services database
  • DirectQuery: Data remains in the source system; Power BI stores metadata and a reference to the source data, executing live queries when a user interacts with a report

As Power BI has evolved, there are now some variations and additions to those options. Composite models allow you to combine imported data sources and DirectQuery data sources. We also now have dataflows, which allow you to use self-service data prep to define and share reusable data entities.

Each of these options has its advantages and limitations. There is no single right answer of which one you should always pick.

If you have been struggling with this topic, or just want to double-check your thinking, please join me and Kerry Tyler (@AirborneGeek on twitter) for our Denny Cherry & Associates Consulting webcast on April 5th at 12pm Mountain / 2pm Eastern.

The webcast will review your options for where to store data and explain the factors that should be used in determining what option is right for you. Obvious requirements such as data size, license costs and management, and desired data latency will be discussed. We’ll also talk about other factors such as the desire for self-service BI and avoiding data model sprawl. We’ll have content to present, but we are also happy to take questions during the webcast.

Register for the webcast today and join us next Friday, April 5th.

Accessibility, Data Visualization, Microsoft Technologies, Power BI

Power BI Now Has Keyboard Accessible Visual Interactions

The March 2019 release of Power BI Desktop has brought us keyboard accessible visual interactions. One of Power BI’s natural strengths is that you can click on a data point within a visual and have it cross-highlight or cross-filter the other visuals on a page. But keyboard-only users weren’t able to use this feature until now. This greatly raises the accessibility of the Power BI report consumption experience.

Below is a demonstration of interacting with a visual using keyboard commands. Notice how I can select specific data points within the line chart, and the other charts on the page filter based upon the selection.

Keyboard commands can now access visual interactions

If you are a person that prefers to use a keyboard over a mouse, this might also be something you want to try. Relevant keyboard commands include:

  • Ctrl + Right arrow: move focus into the chart area of the visual
  • Tab or Arrow keys: navigate between data points (or legend items in a chart that contains a legend)
  • Enter or Space: select a data point within a visual
  • Ctrl + Enter or Ctrl + Space: select multiple data points within a visual
  • Ctrl + Shift + C: clear all selections

I think this was the last big missing piece of keyboard accessibility. I’m excited to see its impact on users. If you try keyboard accessible visual interactions, or are taking advantage of keyboard accessibility in Power BI in general, please let me know how you are liking it. Tweet me at @mmarie or send me a note via my blog contact form.

Azure, Azure Data Factory, Microsoft Technologies

There is Now A Delete Activity in Data Factory V2!

Data Factory can be a great tool for cloud and hybrid data integration. But since its inception, it was less than straightforward to move data (copy it to another location and then delete the original copy).

It is a common practice to load data to blob storage or data lake storage before loading to a database, especially if your data is coming from outside of Azure. We often create a staging area in our data lakes to hold data until it has been loaded to its next destination. Then we delete the data in the staging area once our subsequent load is successful. But before February 2019, there was no Delete activity. We had to write an Azure Function or use a Logic App called by a Web Activity in order to delete a file. I imagine every person who started working with Data Factory had to go and look this up.

But now Data Factory V2 has a Delete activity.

Data Factory V2 Copy activity followed by a Delete activity

How the Delete Activity Works

The Delete activity can delete from the following data stores:

  • Azure blob storage
  • ADLS Gen 1
  • ADLS Gen 2
  • File systems
  • FTP
  • SFTP
  • Amazon S3

You can delete files or folders. You can also specify whether you want to delete recursively (delete the contents of the specified folder and all of its subfolders). If you want to delete files/folders from a file system on a private network (e.g., on premises), you will need to use a self-hosted integration runtime running version 3.14 or higher. Data Factory will need write access to your data store in order to perform the delete.

You can log the deleted file names as part of the Delete activity. It requires you to provide a blob storage or ADLS Gen 1 or 2 account as a place to write the logs.

You can parameterize the following properties in the Delete activity itself:

  • Timeout
  • Retry
  • Delete file recursively
  • Max concurrent connections
  • Enable Logging
  • Logging folder path

You can also parameterize your dataset as usual.

All that’s required in the Delete activity is an activity name and dataset. The other properties are optional. Just be sure you have specified the appropriate file path. Maybe try this out in dev before you accidentally delete your way through prod.

To delete all contents of a folder (including subfolders), specify the folder path in your dataset and leave the file name blank, then check the box for “Delete file recursively”.

You can use a wildcard (*) to specify files, but it cannot be used for folders.

Here’s to much more efficient development of data movement pipelines in Azure Data Factory V2.