Accessibility, Conferences, Microsoft Technologies, PASS Summit, Power BI

I’m Speaking at Virtual PASS Summit 2020

PASS Summit has gone virtual this year, but that isn’t keeping PASS from delivering a good lineup of speakers and activities. I’m excited to be presenting a pre-con and two regular sessions this year. I know virtual delivery changes the interaction between audience and speaker, and I’m going to do everything I can to make my sessions more than just standard lecture and demo to keep things interesting.

Building Power BI Reports that Communicate Insights and Engage People (Pre-Con)

If you are into Power BI or data visualization, check out my pre-con session, Building Power BI Reports that Communicate Insights and Engage People. Unless we’ve had data visualization training, we learn to make reports by copying reports that others have made. But that assumes the people we copy were designing intentionally for human consumption. We also often mimic example reports from tool vendors. That can be very helpful for the technical aspects of getting content on the page, but it overlooks the design aspects of reports that can make or break their usability and effectiveness in communicating information. My pre-con will begin with a discussion of how humans interpret data visualizations and how you can use that to your advantage to make better, more consumable visualizations. We’ll take those lessons and apply them specifically to Power BI and then add on some tips and tricks. Throughout the day, there will be hands-on exercises and opportunities for group conversation. And you’ll receive some resources to take with you to help you continue to improve your report designs.

Agenda slide from my pre-con session: 1) Defining Success, 2) Message & Story, 3) Designing a Visual, 4) Refine Your Report, 5) Applied Power BI, 6) Power BI Tricks, 7) Wrap-Up
Agenda for my PASS Summit pre-con titled Building Power BI Reports that Communicate Insights and Engage People

This session is geared toward people who have at least basic familiarity with Power BI Desktop (if you can populate a bar chart on a report page, that’s good enough). If you have never opened Power BI Desktop, we might move a little fast, but you are welcome to join us and give it a try. If you are pretty good with Power BI Desktop but want to improve your data visualization skills, this session could also be a good fit for you. I hope you’ll register and join my pre-con.

Implementing Data-Driven Storytelling Techniques in Power BI

Data storytelling is a popular concept, but the techniques to implement storytelling in Power BI can be a bit elusive, especially when you have data values that change as the data is refreshed. In this session, we’ll talk about what is meant by story. Then I’ll introduce you to tool-agnostic techniques for data storytelling and show you how you can use them in Power BI. We’ll also discuss the visual hierarchy within a page and how that affects your story. You can view my session description here.

Inclusive Presentation Design

I’m also delivering a professional development session for those of us who give presentations. Most speakers have good intentions and are excited to share their knowledge and perspective, but we often exclude audience members with our presentation design. Join me in this session to discuss how to design your presentation materials with appropriate content formatted to maximize learning for your whole audience. You’ll gain a better understanding of how to enhance your delivery to make an impact on those with varying abilities to see, hear, and understand your presentation. You can view my presentation description here.

Other Pre-Cons from My Brilliant Co-Workers

If you aren’t into report design, my DCAC coworkers are delivering pre-cons that may interest you.

Denny Cherry is doing a pre-con session on Microsoft Azure Platform Infrastructure.

John Morehouse is talking about Avoiding the Storms When Migrating to Azure.

I hope you’ll join one of us for a pre-con as well as our regular sessions. With PASS Summit being virtual, the lower price and removal of travel requirements may make this conference more accessible to some who haven’t been able to attend in past years. Be sure to get yourself registered and spread the word to colleagues.

Azure, Azure Data Factory, Microsoft Technologies, Power BI

Refreshing a Power BI Dataset in Azure Data Factory

I recently needed to ensure that a Power BI imported dataset would be refreshed after populating data in my data mart. I was already using Azure Data Factory to populate the data mart, so the most efficient thing to do was to call a pipeline at the end of my data load process to refresh the Power BI dataset.

Power BI offers REST APIs to programmatically refresh your data. For Data Factory to use them, you need to register an app (service principal) in AAD, give it the appropriate permissions in Power BI, and store its credentials in an Azure key vault that your data factory can access.

I’m not the first to tackle this subject. Dave Ruijter has a great blog post with code and a step-by-step explanation of how to use Data Factory to refresh a Power BI dataset. I started with his code and added onto it. Before I jump into explaining my additions, let’s walk through the initial activities in the pipeline.

ADF pipeline that uses web activities to get secrets from AKV, get an AAD auth token, and call the Power BI API to refresh a dataset. Then an Until activity and an If activity are executed.
Refresh Power BI Dataset Pipeline in Data Factory

Before you can use this pipeline, you must have:

  • an app registration in Azure AD with a secret
  • a key vault that contains the Tenant ID, Client ID of your app registration, and the secret from your app registration as separate secrets.
  • granted the data factory managed identity access to the secrets in the key vault
  • allowed service principals to use the Power BI REST APIs in the Power BI tenant settings
  • granted the service principal admin access to the workspace containing your dataset

For more information on these setup steps, read Dave’s post.

The pipeline contains several parameters that need to be populated for execution.

ADF pipeline parameters

The first seven parameters are related to the key vault. The last two are related to Power BI. You need to provide the name and version of each of the three secrets in the key vault. The KeyVaultDNSName should be https://mykeyvaultname.vault.azure.net/ (replace mykeyvaultname with the actual name of your key vault). You can get your Power BI workspace ID and dataset ID from the URL when you navigate to your dataset settings.

The “Get TenantId from AKV” activity retrieves the tenant ID from the key vault. The “Get ClientId from AKV” retrieves the Client ID from the key vault. The “Get Secret from AKV” activity retrieves the app registration secret from the key vault. Once all three of these activities have completed, Data Factory executes the “Get AAD Token” activity, which retrieves an auth token so we can make a call to the Power BI API.
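
For reference, the “Get AAD Token” request is a standard client credentials call to Azure AD. It looks roughly like this, with placeholders shown where the values retrieved from the key vault are plugged in:

POST https://login.microsoftonline.com/{tenantId}/oauth2/token
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials&client_id={clientId}&client_secret={secret}&resource=https://analysis.windows.net/powerbi/api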

One thing to note is that this pipeline relies on a specified version of each key vault secret. If you always want to use the current version, you can delete the SecretVersion_TenantID, SecretVersion_SPClientID, and SecretVersion_SPSecret parameters. Then change the expression used in the URL property in each of the three web activities.

For example, the URL to get the tenant ID is currently:

@concat(pipeline().parameters.KeyVaultDNSName,'secrets/',pipeline().parameters.SecretName_TenantId,'/',pipeline().parameters.SecretVersion_TenantId,'?api-version=7.0')

To always refer to the current version, remove the slash and the reference to the SecretVersion_TenantID parameter so it looks like this:

@concat(pipeline().parameters.KeyVaultDNSName,'secrets/',pipeline().parameters.SecretName_TenantId,'?api-version=7.0')

The “Call Dataset Refresh” activity is where we make the call to the Power BI API. It is doing a POST to https://api.powerbi.com/v1.0/myorg/groups/{groupId}/datasets/{datasetId}/refreshes and passes the previously obtained auth token in the header.
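
In HTTP terms, that call looks like the following. The Authorization header is built with an expression; I’m assuming here that the token activity is named “Get AAD Token” as above and that its JSON response exposes an access_token property:

POST https://api.powerbi.com/v1.0/myorg/groups/{groupId}/datasets/{datasetId}/refreshes
Authorization: @concat('Bearer ',activity('Get AAD Token').output.access_token)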

This is where the original pipeline ends and my additions begin.

Getting the Refresh Status

When you call the Power BI API to execute the data refresh, it is an asynchronous call. This means that the ADF activity shows success when the API call is made successfully, not when the refresh itself completes successfully.

We have to add a polling pattern to periodically check on the status of the refresh until it is complete.

We start with an until activity. In the settings of the until loop, we set the expression so that the loop executes until the RefreshStatus variable is not equal to “Unknown”. (I added the RefreshStatus variable in my version of the pipeline with a default value of “Unknown”.) When a dataset is refreshing, “Unknown” is the status returned until it completes or fails.

ADF Until activity settings
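
If you want to replicate this, the condition expression in those settings looks like this:

@not(equals(variables('RefreshStatus'),'Unknown'))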

Inside of the “Until Refresh Complete” activity are three inner activities.

ADF Until activity contents

The “Wait1” activity gives the dataset refresh a chance to execute before we check the status. I have it configured to 30 seconds, but you can change that to suit your needs. Next we get the status of the refresh.

This web activity does a GET to the same URL we used to start the dataset refresh, but it adds a parameter on the end.

GET https://api.powerbi.com/v1.0/myorg/groups/{groupId}/datasets/{datasetId}/refreshes?$top={$top}

The API doesn’t accept a request ID for the newly initiated refresh, so we get the most recently initiated refresh by setting $top equal to 1 and assume that is the refresh whose status we want.

The API provides a JSON response containing an array called value with a property called status.
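
Abbreviated, the response looks something like this (the values shown are placeholders):

{
  "value": [
    {
      "requestId": "...",
      "refreshType": "ViaApi",
      "startTime": "...",
      "endTime": "...",
      "status": "Completed"
    }
  ]
}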

In the “Set RefreshStatus” activity, we retrieve the status value from the previous activity and set the value of the RefreshStatus variable to that value.

Setting the value of the RefreshStatus variable in the ADF pipeline

We want the status value in the first object in the value array.
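
In expression form, the value assigned to the variable looks something like this. I’m calling the web activity “Get Refresh Status” here for illustration; substitute the actual name of your activity:

@activity('Get Refresh Status').output.value[0].status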

The until activity then checks the value of the RefreshStatus variable. If your dataset refresh is complete, it will have a status of “Completed”. If it failed, the status returned will be “Failed”.

The If activity checks the refresh status.

If activity expression in the ADF pipeline

If the refresh status is “Completed”, the pipeline execution is finished. If the refresh status isn’t “Completed”, then we can assume the refresh has failed. If the dataset refresh fails, we want the pipeline to fail.
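
The If condition itself is a simple equality check:

@equals(variables('RefreshStatus'),'Completed')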

There isn’t a built-in way to cause the pipeline to fail, so we use a web activity to throw a bad request.

We do a POST to an invalid URL. This causes the activity to fail, which then causes the pipeline to fail.
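
As a sketch, a web activity configured something like this does the trick; the URL and message below are illustrative placeholders, not a real endpoint:

POST https://powerbirefreshfailed
Body: @concat('Power BI dataset refresh failed with status: ',variables('RefreshStatus'))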

Since this pipeline has no dependencies on datasets or linked services, you can just grab my code from GitHub and use it in your data factory.

Data Visualization, Microsoft Technologies, Power BI

Power BI Data Viz Makeover: From Drab to Fab

On July 11 at 3pm MDT, Rob Farley and I will be hosting a webinar on report design in Power BI. We will take a report that does not deliver insights, discuss what we think is missing from the report and how we would change it, and then share some tips from our report redesign.

Rob and I approach data visualization a bit differently, but we share a common goal of producing reports that are clear, usable, and useful. It’s easy to get caught up building shiny, useless things that show off tech at the expense of information. We want to give you real examples of how to improve your reports to provide the right information as well as a good user experience.

We’ll reserve some time to answer your questions and comments at the end. Come chat Power BI data viz with us.

You can register for the webinar at https://www.powerbidays.com/virtualevent/colorado-power-bi-days-2020-07-11/.

Come for the data viz tips, stay for the witty banter!

Data Visualization, Microsoft Technologies, Power BI

Data Visualization, Context, and Domain Expertise

I recently posted a graph to twitter and asked people to explain it.

Let’s look at the graph.

Bar chart showing low levels of steps in April until April 25th, when they increase about 3x and remain at that level through May.
Chart from Fitbit showing my step count from April 1 through May 23.

The graph is from Fitbit. It shows the number of steps I took each day between April 1 and May 23. We can see that I had a very low number of daily steps between April 1 and April 24. Then there is a spike where my steps almost quadruple on April 25. They decrease a bit for a couple of days while remaining well above the previous average. Then my steps increase again, staying up around 10,000 steps.

The Responses

The responses I received to my tweet largely fell into 3 categories:

  1. Complaints about the x-axis label
  2. Simple interpretations of the graph saying that the steps increased on April 25 and remained higher, often accompanied by statements that there isn’t enough data to explain why that happened.
  3. Guesses as to why the steps increased and then remained higher.

The X-Axis Label

Many of my twitter friends create data visualizations for fun and profit. It didn’t surprise me that they weren’t happy with the x-axis.

There are multiple x-axis labels that show the month and year, but the bars are at the day level. It’s unusual to see the Apr ’20 label repeated 4 times as we see in this graph. It’s not necessarily inaccurate, but its imprecision goes against convention.

The fact that multiple people commented on it demonstrates to me that it is more distracting than helpful. The way you format your data visualizations can pull attention away from the information itself. This is why I tweet and talk about bad charts and how to improve them for human consumption.

Literal Interpretation

Some people were only comfortable sticking with the information available in the chart. They acknowledged that the steps went up. Some said there were too many possible explanations to narrow it down to a certain reason why.

Speculative Explanations

I enjoyed the many guesses as to why my steps increased:

  • I suddenly got motivated to exercise more
  • I moved my office further from my bedroom
  • I’m building a really big staircase
  • The device used to track my steps changed
  • I started playing Just Dance every day
  • Covid-19 lockdown ended

A few people who know me (or at least pay attention to my twitter feed) had some insight.

I did get a new dog during the timeframe, but I got her on April 28th, not April 25th.

Also, the weather did warm up about 12 degrees Fahrenheit over the timeframe.

The Necessary Context

For the curious, here’s the real explanation.

I lost my dog Buster on April 4. He was with me for over 9 years, and he was my best friend. He was suddenly not feeling well at the end of March, and he was diagnosed with cancer. He declined rapidly, and I stayed with him on the living room floor until it was time to say goodbye. During those first 4 days of April, I really only left the living room to take Buster outside. I slept a lot that weekend and didn’t move much because I was sad.

With losing Buster, everything associated with Covid-19, and some other personal issues, I was depressed for the next few weeks. But I was also very busy with work. I had no energy to do anything else after work. And there wasn’t much to do since my city and state were on lockdown for Covid-19.

On April 25, I decided that the only way to get out of the emotional hole I was in was to get up and do something, so I walked a few miles around a nearby park. I came home and looked on PetFinder.com to see if there was a dog I’d like to adopt, and I came across a bulldog mix at Foothills Animal Shelter. I hadn’t cleaned my house since Buster died (see: depression). So I spent the rest of the weekend cleaning and dog-proofing just in case I brought the dog home.

On April 28, I adopted Izzy, a bulldog/boxer mix.

Izzy likes to walk. We walk between 2 and 4 miles each day. She is most of the reason the step count remained high throughout May.

Nice Dog. So What?

I hope what you’ll take away from this story is that to really deliver insights, you need to know the subject of your data visualizations. You need domain expertise. And it helps to have your own observations or other datasets to support the events you are visualizing.

If you don’t know me, any of the speculations could be the right answer. And the most you could do with my Fitbit data is to provide descriptive analysis, simply saying what happened without going into why. Many people who follow me on Twitter knew I recently got a dog. That explains the increase in step count from April 28 going forward. But it doesn’t address April 25th. Without the additional context of my step count in other months, you don’t know what my average step count is outside of this view. You wouldn’t know if my average count is normally closer to 3,000 or 10,000 because you don’t have that data. This is a perfect example of where you would need more data, more months of this data as well as additional datasets, to understand what is really going on. Sometimes there are actual datasets we can acquire, like weather data or Covid-19 lockdown dates. But there is no dataset for me losing Buster or struggling with depression.

This is part of why I prefer the term “data-informed decisions” over “data-driven decisions”. We often don’t have all the data to really understand what is going on. Technology has improved (see: Power BI) to make it quicker and easier to mash up data to provide a more complete picture. But we’ll still have to make decisions based upon incomplete data. If we have domain expertise, we may need to review data and ask questions to get better insights, and then rely on our knowledge and experiences to complete the picture.

This chart is also a good representation of a common issue in business intelligence: we often settle for only descriptive analytics. It may even have been a struggle just to get there. Let’s say I’m trying to become more active and using step count as a metric. You see this chart and see the increase in steps and say “That’s great. Do whatever you did last month to increase your steps even more.” Am I supposed to get another dog? Get less depressed?

Let’s pretend that my chart is not about my step count but is an operational report for an organization. That one chart tells you a trend of a single measure. We need more data, more visuals for this information to be impactful. The additional data adds necessary context. If this were a Power BI report, we might use interactivity to provide navigation paths to explore common questions about the data and to help identify what is influencing the current trend. Then you could use the report to facilitate a more productive conversation. I’m not addressing AI here, but after understanding the data and decisions made from it, you might introduce some machine learning to automate the analysis process.

Just having a report on something is not enough. The goal of data visualization is not to show off your data (if your service/product is data, that’s a different thing). It’s to help provide meaningful information to people so they can make decisions and take action. In order to do that, we need to understand our audience, the domain in which they are operating, and the questions they are trying to answer. Data visualization tools make it easy to get things on the page, but I hope you are designing your visualizations purposefully to facilitate data-informed decisions.

Microsoft Technologies, Power BI

An Updated Version of the Power BI Enterprise Deployment Whitepaper is Available

A new version of the Microsoft whitepaper “Planning a Power BI Enterprise Deployment” is now available. Once again, Melissa Coates (b|t) and Chris Webb (b|t) are the authors. I was lucky enough to be the tech editor again on this version, so I’m excited to see the new information be released to the public.

There were quite a few updates this time. Here are some of the highlights:

  • Section 3, “Power BI Architectural Choices”, has updated information on dataflows and Power BI Premium. It also includes a nice section clarifying the options available for embedding Power BI content.
  • Section 4, “Power BI Licensing and User Management”, has been updated to include information on self-service purchasing.
  • Section 5, “Power BI Source Data Considerations” now includes information on dataflows.
  • Section 6, “Power BI Dataset Storage Options” now contains information about Automatic Page Refresh and large models.
  • Section 7, “Power BI Data Refresh and Data Gateway” now mentions the Power Platform Admin Center. It also discusses dataflow refreshes in addition to dataset refreshes. And more information has been added regarding the use of gateway clusters for load balancing and high availability.
  • Section 8, “Power BI Dataset and Report Development Considerations” contains new information on shared datasets and .pbids (Power BI Data Source) files. It also has a new section providing guidance on information design and accessibility. And it provides updated information on the use of custom visuals.
  • Section 9, “Power BI Collaboration, Sharing and Distribution”, has been updated to reflect the new workspace experience. It also discusses shared and certified datasets and the new deployment pipelines. It also contains a nice decision tree to help you determine whether to use apps, workspaces, or sharing.
  • Section 10, “Power BI Administration”, has new recommendations for tenant settings. It also discusses protection metrics, custom help menus, custom branding as well as providing new information on managing workspaces and dataflows. And it discusses the new activity log and related PowerShell modules.
  • Section 11, “Power BI Security and Data Protection”, now discusses the roles in the new workspace experience as well as sensitivity labels and Microsoft Information Protection.
  • An updated list of deprecated items can be found in section 12, “Power BI Deprecated Items”.
  • Section 13, “Support, Learning, and Third-Party Tools” contains a great list of helpful resources for the Power BI practitioner.

I hope you’ll take a glance through the updated whitepaper and catch up on all the new information. Happy reading!

Conferences, Microsoft Technologies, Power BI

Power Up: Exploring the Power BI Ecosystem, May 27-28

Next week I’m speaking at the Dynamic Communities Power Up event titled “Exploring the Power BI Ecosystem”. It takes place on May 27 & 28, 2020. This exciting 2-day virtual event is designed to ensure attendees have a complete view of the Power BI product and surrounding ecosystem, provide expanded knowledge of the core components, and showcase the possibilities for continued exploration and innovation.

Sessions during the event are 2.5 hours long, to really give you time to get into a topic. There are healthy 45-minute breaks between sessions to give you time to attend to personal matters. And the sessions are recorded to give you a chance to catch anything you miss. Some sessions, including mine, offer a take-home exercise to help solidify concepts discussed during the session.

I’m presenting Data Visualization and Storytelling on May 28 at 9am EDT/1pm UTC. In this session, you will learn how to build eye-catching Power BI reports to support decision making. You will also see the importance of data storytelling and a realistic approach to implementing it.

The following topics will be showcased through practical examples:

  • Creating beautiful reports: prioritizing your KPIs, playing with colors, grid
  • Choosing the best chart to illustrate your point
  • Introduction to the concept of Data Storytelling
  • Implementing quality checks on your report design
  • Implementing navigation in your report: bookmarks, drill-through, page-report tooltips, interactive Q&A

This training is a paid event, but it’s just $399 for the full 2 days. This training is great if you are a beginner-to-intermediate Power BI user trying to round out your skills across the many areas of the Power BI suite. You can head over to the website to register. I hope to see you there!

Accessibility, Microsoft Technologies, Power BI

Check Out My MBAS Presentation on Power BI Report Accessibility

I had the privilege of working with Tessa Hurr (PM on the Power BI team) on a presentation for the 2020 Microsoft Business Applications Summit (MBAS) about five features in Power BI that increase report accessibility. This 23-minute presentation is almost entirely demos, with only a few slides. While we talk about some features such as alt text and tab order that are primarily used for accessibility purposes, we also talk about how chart titles, header tooltips, and report themes can be used to make your report more accessible.

Presentation slide listing Five Features that Increase Report Accessibility: tab order, chart titles, header tooltips, alt text, and report themes
Slide from the MBAS 2020 session Creating accessible reports in Power BI

The conference was entirely online this year, and you can catch the sessions on demand now. I hope you’ll take some time to watch my session as well as the other great content that came from the conference. You can watch my session on the MBAS website.

Azure, Azure Data Factory, Logic Apps, Microsoft Technologies

Using Logic Apps in a Data Factory Execution Framework – Part 1

Data Factory allows parameterization in many parts of our solutions. We can parameterize things such as connection information in linked services as well as blob storage containers and files in datasets. We can also parameterize certain properties in activities. For instance, we can write an expression to determine the stored procedure to be executed in a Stored Procedure Activity or the filename in the sink (destination) of a Copy Activity.
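
For example, the stored procedure name in a Stored Procedure activity could be set with an expression like the one below, where the TableName parameter is invented for illustration:

@concat('dbo.usp_Load_',pipeline().parameters.TableName)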

But we cannot parameterize the invoked pipeline in an Execute Pipeline Activity. This means we need to find workarounds in order to have a metadata-driven execution framework. By metadata-driven execution framework, I mean that metadata stored in a datastore (in my case, a SQL database) determines what pipelines and activities get executed. With this type of framework, if I don’t want a specific pipeline to execute, I just update the metadata in the datastore rather than deleting the pipeline execution from the parent pipeline. We’ve been doing this type of development in SSIS for years, and Biml has played a big part in that. But SSIS allows us to parameterize the Execute Package Task.

Since we can’t implement this parameterized execution of pipelines natively, we need to look for something that Data Factory can call to accomplish the task. Paul Andrew has a nice framework that uses Azure Functions. I was working on a Data Factory solution for a client who doesn’t have C# or PowerShell developers on hand to help with the ELT process, so we needed to explore a low-code solution.

While there is no Logic App activity in Data Factory, we can use a Web Activity to call the Logic App. I might have a pipeline that looks something like what is pictured below.

Data Factory pipeline that uses a Stored Procedure to capture the start of the pipeline, a Lookup to get the list of files to be copied, a ForEach loop to copy each of the files, and a Stored Procedure to mark the end of the pipeline.
Staging pipeline that copies files from Azure Data Lake Storage to Azure SQL Database

Within the ForEach loop is a single Web Activity.

Data Factory Pipeline Web Activity calling a Logic App. An expression populates the URL, and a GET method is used.
Web Activity that calls a Logic App

I used some variables and parameters in an expression to populate the URL so it would be dynamic. I used a GET method in the call.
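
As a sketch, that URL expression might look something like the following. Every variable, parameter, and path segment name below is invented for illustration, and a real Logic App trigger URL also carries a signature in its query string:

@concat(variables('LogicAppBaseUrl'),'/pipeline/',item().PipelineName,'/datasource/',item().DataSourceId,'/country/',item().Country,variables('LogicAppQueryString'))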

My initial version of my Logic App is shown below.

Logic App workflow with an HTTP request trigger. 1) Create a pipeline run. 2) Initialize Variable. 3) Until loop. 4) HTTP Response.
Logic App that executes a Data Factory pipeline and waits for it to complete before returning a response

I added path parameters in my HTTP request trigger to allow me to capture the information I need to execute the appropriate pipeline. For me this included the pipeline name, a data source ID, and a country. Your parameters would vary according to your requirements.

HTTP Request trigger in a logic app with 3 path parameters: pipeline, country, Data Source ID
HTTP Request trigger in my Logic App
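
The relative path configured on the trigger would look something like this sketch, matching the parameters above:

/pipeline/{pipeline}/datasource/{datasourceid}/country/{country}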

Logic Apps has an action called “Create a pipeline run”. You tell it which data factory, which pipeline, and any parameter values needed for the pipeline execution.

Create a pipeline run action in a logic app. Data Factory Pipeline Name is populated by a parameter. The pipeline parameters are populated by a mix of static JSON and parameters.
Create a pipeline run action in my Logic App

At this point in the workflow, our pipeline would be executing. But now we need to know when it has finished. That’s what the Initialize Variable and Until Loop actions are handling. I created a string variable called PipelineStatus and set the default value to “InProgress”. My Until loop action checks my pipeline execution status. If it’s still running, it waits 5 seconds, gets the new status, and assigns that status to the variable. This repeats until the pipeline execution is no longer in progress.

Here’s the until loop condition I used; it evaluates to true once the pipeline execution is no longer running:

@and(not(equals(variables('PipelineStatus'), 'InProgress')),
not(equals(variables('PipelineStatus'), 'Queued')))
Until loop in a logic app. Checks status of pipeline run. 1) Delay action. 2) Get a pipeline run. 3) Set variable.
Until loop in my Logic App to dynamically execute a Data Factory pipeline
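
The Set variable action pulls the status from the output of the “Get a pipeline run” action with an expression along these lines (note that Logic Apps replaces the spaces in action names with underscores when you reference them in expressions):

@body('Get_a_pipeline_run')?['status']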

Once the pipeline execution is complete, an HTTP response with the pipeline status is sent back to the caller.

HTTP Response action with status code 200 and pipeline status value in the body.
HTTP Response action in my Logic App
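
The response body can be as simple as a bit of JSON that carries the final status back to the caller, something like:

{
  "PipelineStatus": "@{variables('PipelineStatus')}"
}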

This is all great until you find out that Logic Apps will experience an HTTP timeout if the request takes more than 2 minutes.

Do you have any pipelines that take longer than two minutes to execute? If so, you need to change your solution to handle this. Note that you would have the same issue with Azure Functions, although it would give you 230 seconds instead of 120 seconds before it timed out. We need to switch to an asynchronous call to support long running pipelines. Paul has already done this in his framework using Azure Functions. In Logic Apps, we can change our response to an asynchronous response and then implement a polling pattern to check the status. We could alternatively implement a webhook action. I’ll write about updating the solution to handle long running pipelines in a future post.

Consulting, Data Visualization, Microsoft Technologies, Power BI

Stress Cases and Data Visualization

Times are stressful right now. There is an ongoing pandemic affecting people’s health and livelihoods. Schedules are messed up, kids are home. People who aren’t used to working remotely are fumbling through learning how to work from home. And then there are the normal stresses that aren’t taking a break just because there is a pandemic: arguments with family members, home repairs, student loan debt.

Woman with her head in her hands
Life is stressful right now. It’s ok to not be ok.

I recently read the book Design for Real Life by Eric Meyer and Sara Wachter-Boettcher. I discovered it because I read another book by Sara Wachter-Boettcher, Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech that encourages us to look at the technology around us and consider how it can and has negatively affected people. If you work in analytics or web design, I recommend both books. She approaches design more from the context of user interfaces and web apps, but there is a lot of applicability to data visualization.

What’s a Stress Case?

Design for Real Life encourages us to consider stress cases. The idea behind stress cases is to consider our users as full, complex, sometimes distracted, vulnerable, and stressed-out people. They aren’t just happy personas who always click the right buttons and share all of our knowledge and assumptions.

Certain use cases, often labeled as edge cases, go unidentified or are dismissed as too infrequent. As user interface designers, including data visualization designers, we need to train ourselves to seek out those stress cases and understand how our design choices affect people in those usage scenarios. This helps us to minimize harm done by our design. But also, as stated in the book,

When we make things for people at their worst, they’ll work that much better when people are at their best.

Design for Real Life

There is a toxic trend in tech that we developers make things for ourselves, valuing the developers and designers over the users. Designing for stress cases helps us change that trend. We need to value our users’ time, understand our biases as designers, and make features that match our users’ priorities. A very applicable part of this for data visualization is to consider all the contexts that usage scenarios might happen.

Stress Cases in Data Visualization

Whether you are designing a visualization to be embedded in an app, included in an online news article, or published in a corporate dashboard, you can design for stress cases.

Let’s think about a corporate HR report built in Power BI that shows a prediction of employee retention. HR and team managers may use this information to help them assign projects or give promotions and raises. This dashboard and the conversation around it may address information related to identity (race, gender, sexual orientation). The use of the dashboard may affect someone’s compensation and job satisfaction.

How can we design for stress cases here?

We should be asking if we have the right (appropriate and validated) data to accomplish our goals, not just using whatever we can get our hands on. We definitely need to consider laws against discrimination that might be applicable to how we use demographic data. We need to consider the cost or harm done if we allow users to see these predictions down to the individual employee rather than an aggregation. And we should always be asking what actions people are taking as a result of using our dashboard. Have we considered any unintended consequences?

We want to use plain, easy to understand language. We want to explain our data sources and (at least high-level) how the predictive algorithm works. And we need to explain how we intend for this dashboard to be used. Basically, we want to be as transparent and easy to understand as possible. This dashboard can affect people’s livelihoods and happiness as well as the operational and financial health of the business. These data points are more than just pretty dots – they are people.

We also want to make sure our dashboard is easy to use. Think about digital affordances. We want it to be clear what happens when someone interacts with it. We can get fancy with bookmarks and editing interactions in a Power BI report. But does our intended user understand what is shown when they click a button that loads a bookmark? Can the intended user easily get what they need? Or do they have to jump through hoops to get to the right page and set the right filters?

Let’s say Sarah is a new manager at an organization that is trying to improve a high attrition rate, and she needs this information for a meeting with her boss and peers. She’s sitting in an online meeting using a 3-year-old tablet with 10 minutes of remaining battery life. Her two-year-old child is running around in the next room. The group is discussing whether they should let some people go and then try to rebuild their teams. Can Sarah easily pull up the dashboard on her tablet and set the correct filters to see attrition numbers and retention factors for her team while trying to put on a brave face in front of her colleagues as she races against the tablet’s remaining battery time?

Visualizing the Coronavirus

Currently, it seems everyone in the data community wants to visualize the spread of COVID-19. Just look at the Power BI Data Stories Gallery. I get the appeal of using new and relevant data for practice, but consider your audience and the consequences of making it public. Do you trust and understand your data? Have you considered how the chart types and colors you choose can misrepresent the data? Who is it helping for you to publish your visualization out into the world? What message is your visualization sending? Is it unintentionally adding stress for your already stressed-out twitter followers? Most of us are not working with city officials or healthcare practitioners. We are visualizing this “for fun”. People are showing off Power BI/Tableau/D3 skills and not necessarily communicating clearly with purpose.

Your twitter friend/follower Frank is really concerned about the pandemic. His daughter lost her job at a restaurant. He’s worried about lining up projects for the next few months. And his wife is immunocompromised and in a higher risk group for getting COVID-19. He feels scared and helpless.

Your friend Dave is worried about his mother who lives alone a few hours away from him. He is bombarded by messages every day that say older people are more at risk of getting COVID-19. His mom says she is fine, but he wants her to come and stay with him. It’s all he’s thought about all day.

If you know nothing about epidemiology and don’t understand the response from various national, state, and local governments in your data, and you publish your viz with a choropleth (filled) map covered in shades of red, how will that affect Frank and Dave and everyone else who reads it? Did you add value to the COVID-19 conversation, or just increase confusion and fear?

Amanda Makulec published Ten Considerations Before You Create Another Chart About COVID-19 with some good advice, including the following.

To sum it up — #vizresponsibly; which may mean not publishing your visualizations in the public domain at all.

Amanda Makulec

Stress cases can be related to a crisis, mundane technology failures, or just situations that are stressful in the context of the user’s life. If you are visualizing data, remember there are people who may be interacting with our visuals in less than ideal circumstances. We need to design for them as much as for our ideal use cases.

If you have real-life examples of designing for stress cases in data visualization, please share them in the comments or tweet me at @mmarie.

Accessibility, Data Visualization, Power BI

PolicyViz Podcast Episode on Accessibility

I had the pleasure of talking with Jon Schwabish about accessibility in data visualization. The episode was released this week. You can check it out at https://policyviz.com/podcast/episode-169-meagan-longoria/.


If you’ve never thought about accessibility in data visualization before, here is what I want you to know.

  1. Your explanatory data visualization should be communicating something to your intended audience. You can’t assume people in your intended audience do not have a disability. People with disabilities want to consume data visualizations, too.
  2. We can’t make everything 100% usable for everyone. But that doesn’t mean we should do nothing. Achieving accessibility is a shared responsibility of the tool maker and the visualization designer. There are several things we can do to increase accessibility using any data visualization tool that don’t require much effort. Regardless of the tool you use, you can usually control things like color contrast, keyboard tab/reading order, and removing or replacing jargon.
  3. Accessible design may seem foreign or tedious in the beginning. We tend to design for ourselves because that is the user we understand most. But if we start adding tasks like checking color contrast and setting reading order into our normal design routine, it just becomes habit. Over time, those accessible design habits become easier and more intuitive.

I hope that one day accessible design will just be design. You can be part of that effort, whether you are a professional designer, a database administrator just trying to show some performance statistics, or an analyst putting together a report.

Listen to the podcast for my top 5 things you should do to make your data visualizations more accessible.