Microsoft Technologies, SSIS

Using a tnsnames.ora file with the Microsoft Connector for Oracle in SSIS

One of the nice things about the Microsoft Connector for Oracle is that it doesn’t require installation of an Oracle client. But because of this, you may not have the expected settings and files on the computer where your SSIS package is running.

A client ran into this recently, and the answer was to create a system environment variable.

Although you can now specify an Oracle database using the EzConnect format, it’s still very common to use a tnsnames.ora file. This file specifies a connection name and connection details for an Oracle database. It’s helpful in that you can provide a “friendly name” to the database specified in the file and then reference the file from multiple client tools.
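For illustration, a minimal tnsnames.ora entry maps a friendly name to the connection details. In this sketch, the alias (MYDB), host, port, and service name are all placeholders:

```
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oraclehost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = mydb.example.com)
    )
  )
```

Client tools can then connect using just the alias MYDB instead of the full host and service details.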

When using a tnsnames.ora file, the connector needs to know the location of the folder containing that file. This location can be specified by a system environment variable or a registry key.

If you have installed an Oracle client, the tnsnames.ora file will likely be located in the ORACLE_HOME\network\admin folder and you will already have the required registry key and/or environment variable.

If you have not installed an Oracle client, you will likely need to add an environment variable on the computer running SSIS.

A Windows 10 Window labeled New System Variable has two text boxes to populate. The first is labeled Variable name and is populated with the value "TNS_ADMIN". The second is labeled Variable value and is not populated.
The New System Variable window in Windows 10

To add the environment variable in Windows 10, Windows 11, or Windows Server 2022:

  1. Right-click the Start icon and select System.
  2. In the Settings window, select Advanced System Settings.
  3. On the Advanced tab of the System Properties window, select Environment Variables.
  4. In the Environment Variables window under System, select New.
  5. In the New System Variable window, enter “TNS_ADMIN” for the Variable name and the correct path to the folder that contains your tnsnames.ora file for the Variable value.
  6. Select OK in the New System Variable, Environment Variables, and System Properties windows.
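If you prefer scripting the change, the same system variable can be created from an elevated command prompt with setx. The path below is an example; use the folder that actually contains your tnsnames.ora file:

```
setx TNS_ADMIN "C:\oracle\network\admin" /M
```

Because processes read environment variables at startup, restart any running SSIS-related processes (or the machine) before expecting the new variable to be picked up.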

Vague Error Message

If the Oracle connector cannot find the tnsnames.ora file, it doesn’t return a detailed error message to tell you this. Instead, you get the generic “DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER” telling you the AcquireConnection method call to the connection manager failed. Basically, it assumes that you are supplying the connection info in EzConnect format and then can’t find the database at the friendly name you provided in the connection manager. Microsoft Support helped us determine that this was the cause of our connectivity issues after we logged a support request.

I hope this helps someone avoid time spent troubleshooting. If you have more tips about the Microsoft Connector for Oracle or tnsnames.ora files, feel free to leave them in the comments.

Data Visualization, Power BI, Workout Wednesday

Custom labels on bar and column charts in Power BI

Did you know that you can create labels on bar charts that don’t use the fields in the field wells? You absolutely can!

I did this in the exercise for Workout Wednesday 2023 for Power BI Week 20.

A Power BI report showing small multiple column charts with custom labels on each column.
Power BI report containing a column chart with small multiples that show custom labels on each column.

Notice the label on each column that shows the year and average game length in h:mm format.

These custom labels are available for any bar or column chart – small multiples are not required.

How to add custom labels

The setting for these labels is a little bit hidden. On my column chart, I went to the Format pane, located the Data labels section, and found the Values area. Under Values, there is a toggle for Custom label that must be enabled before you can populate the Field below it.

You can drag any field in the fields list into the custom label field. It only accepts one field, so if you want to display multiple values, you’ll need to create a custom column to concatenate values. That is exactly what I did in my example report.
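As a sketch of that approach (the table and column names here are hypothetical, not from my report), a calculated column could concatenate a year with an average game length formatted as h:mm:

```dax
Label =
'Games'[Year] & "  " &
    INT ( 'Games'[AvgGameLengthMinutes] / 60 ) & ":" &
    FORMAT ( MOD ( 'Games'[AvgGameLengthMinutes], 60 ), "00" )
```

The resulting text column can then be dragged into the custom label field as a single value.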

You can control the orientation and position of the labels (mine are set to horizontal and inside base). You can also format the label values and background.

Happy labeling!

SQL Server, SSIS

The Many Oracle Connectors for SSIS

I recently worked with a client who was upgrading and deploying several SSIS projects to a new server. The SSIS packages connected to an Oracle database in various tasks. There were:

  • Data Flows that used the Oracle Source and Oracle Destination
  • Data Flows that used an OLE DB connection to Oracle
  • Execute SQL tasks that connected to Oracle via an ADO.Net connector

We needed to make sure we had the most current version of the drivers required to make the package work.

This turned out to be an adventure involving 3 different drivers and plans to refactor, so I’m documenting some of it here in case it helps someone else.

SSIS Target Versions and Visual Studio Versions

It’s important to understand the relationship between Visual Studio and SSIS target versions. In Visual Studio, we must install an extension to allow us to develop SSIS packages. The Visual Studio version is distinct from, but related to, the SSIS target version. Visual Studio 2019 with the 2019 SSIS extension supports SSIS versions 2012 through 2022. Visual Studio 2022 with the 2022 SSIS extension supports SSIS versions 2017 through 2022.

The target version is set in the project configuration properties. This target version should match the version of SSIS running on the server where you plan to deploy the SSIS packages.

Oracle Driver Changes since SSIS 2017

In the olden days (pre-SQL 2019) we used the Attunity Connector to connect to Oracle because it provided the best performance. Attunity was acquired by Qlik in 2019, and Microsoft took over maintaining and distributing the connectors. If you need the Attunity connectors for SSIS versions 2012 – 2017, you will find the links for the downloads here.

For target SQL Server versions 2019 through 2022, we now have the Microsoft Connector for Oracle. This new connector is interesting because we are no longer required to install an Oracle client to make it work. It also allows us to connect directly to an Oracle server without using a tnsnames.ora file if we so choose, but the file can still be used if desired.

There are separate downloads of the Microsoft Connector for Oracle for SQL Server 2019 and 2022. And you have to download the 32-bit driver and 64-bit driver separately.

Different Drivers for Different Connection Types

Attunity wasn’t the only way to connect to Oracle using SSIS. As noted above, you could also use a .NET provider for Oracle or an OLE DB provider for Oracle. To reduce complexity and increase supportability, I would advise consolidating on a single driver to connect to Oracle as much as possible. But note that the Microsoft Connector for Oracle cannot be used in an Execute SQL task, so you might still end up with two different connectors without a lot of refactoring.

This gets a bit confusing due to naming, so I’m linking to the current version of drivers (as of May 2023) to be used with SSIS target version 2022. These are the drivers I’m aware of that I’m sure are supported by either Microsoft or Oracle. There are some other drivers out there that are either no longer maintained or provided by third parties.

Connection Type       Driver                              Provider Name in Visual Studio
Oracle Connection     Microsoft Connector for Oracle      Oracle Connection Manager
OLE DB Connection     ODAC 21c                            Oracle Provider for OLE DB
ADO.Net Connection    Oracle Client for Microsoft Tools   OracleClient Data Provider

You’ll need to install both the 32-bit and 64-bit drivers. The 32-bit is used in Visual Studio when developing SSIS packages. The 64-bit driver is used by default when executing packages on the SSIS server (unless you configure the package to execute in 32-bit mode).

In SSIS 2022, if you are reviewing SSIS logs and see errors related to an Oracle connection, and you see mention of ODBC and the Oracle Connection Manager, that is the Microsoft Connector for Oracle.

If you see errors in the SSIS logs related to an Oracle connection, and you see mention of OLE DB provider OraOLEDB.Oracle.1, that is the driver from ODAC.

I hope this makes clear your options and required drivers for connecting SSIS to Oracle.

Microsoft Technologies, Power BI

How to use the new dynamic format strings for measures in Power BI

The April 2023 release of Power BI Desktop introduced a new preview feature called dynamic format strings for measures. This allows us to return values with different formats from the same measure. Previously, we needed to create calculation groups (usually by using Tabular Editor) to accomplish this. But now it is built into Power BI Desktop.

This is great when you are converting to different currencies or switching measures in a report page. Let’s say you have a measure that returns different values based upon a selection in another visual on the page. For instance, I created a report that allows a consumer to choose a metric to be shown across multiple visuals. Depending on the selection, I may be returning an integer, a percentage, or a currency amount. If you can use parameters to switch measures, Power BI takes care of the measure formats for you. But if you had to write your own measure to handle more complex logic, you must handle the measure formatting on your own. I did this in the below report for Workout Wednesday 2023 Week 16 because I wanted a measure that consistently filtered to only the top N products that could be used in multiple visuals on the page and didn’t unnecessarily repeat DAX logic.

A Power BI report with slicers to select the top N products and a metric. The four charts switch values based upon the slicer selections.

The measure used in my visuals was called Value.

The formula bar in Power BI Desktop showing the measure definition for a measure called Value.
Value =
VAR SelectedMetric =
    SELECTEDVALUE ( 'Metric'[Metric] )
VAR NumProducts = [Number of Products Value]
VAR TopNMetric =
    CALCULATE (
        [Measure Switcher],
        KEEPFILTERS (
            TOPN ( NumProducts, ALLSELECTED ( Product[Product] ), [Measure Switcher] )
        )
    )
RETURN
    TopNMetric

In Power BI Desktop version 2.116.622.0 or later, there is an option to set the measure format string.

As of April 2023, you need to enable the preview feature. Go to File -> Options and settings -> Options -> Global -> Preview features. Enable the dynamic format strings for measures option. You will need to restart Power BI Desktop for this to take effect.

Next, open your report, select your measure, and change the format to Dynamic.

The format selection drop-down now has an option labeled Dynamic at the bottom.

A new drop-down box will appear next to the formula bar.

The new drop-down next to the formula bar where users can choose Measure or Format.

By default, Measure will be selected and the measure definition will be shown. But you can now change the drop-down to Format to see the format expression. The format expression is written in DAX.

In the format expression below, if the selected metric is “# of Customers”, I formatted it as an integer with no thousands separator. If the selected metric is “Gross Margin %”, I formatted it as a percentage with two digits after the decimal. Otherwise, I formatted the metric as an integer with a thousands separator.

The formula bar showing the expression for the format expression. 
SWITCH (
    SELECTEDVALUE ( Metric[Metric] ),
    "# of Customers", "#",
    "Gross Margin %", "0.00%",
    "#,###"
)

The end result is that any axes, values, data labels, and tooltips are formatted according to my expression, while maintaining the numeric data type.

A Power BI report with slicers to select the top N products and a metric. The four charts switch values based upon the slicer selections. The Gross Margin % measure is selected, and all axes and values show numbers formatted as percentages.

A few things to note

This dynamic measure format string is scoped to the individual measure. By contrast, calculation groups, which are often used to accomplish a similar goal, are scoped to the model and require extra logic to limit which measures they apply to. That is a common point of frustration when people learn calculation groups.

The SELECTEDMEASURE() function, which is commonly used in calculation items, can be used in the dynamic measure format string to reference the measure, but you can also just use the measure name in brackets.

There are some limitations at the moment that limit use with report measures and DirectQuery for Analysis Services.

If you need more practice or would like an example involving currency conversion, you’ll find a tutorial in the Microsoft documentation.

Azure, Azure SQL DB, Microsoft Technologies, SQL Server

How to Change the Browser Used by SSMS for AAD Auth

Did you know that you can change the browser used by SQL Server Management Studio to authenticate using Azure Active Directory to a SQL database in Azure?

I had been experiencing serious delays with the window that pops up to accept my credentials taking 30 seconds or more to populate. I also once got a warning that the browser I was using was old.

I recently learned that I can change the browser used for this purpose in the SSMS settings.

First, make sure you have updated SSMS to the latest version.

Then, go to Tools > Options. Select Azure Services from the menu on the left in the Options dialog. In the Miscellaneous section, you will see a setting titled “UseSystemBrowser”. By default, it will be set to false.

The Options dialog in SSMS showing the UseSystemBrowser setting

When you set this to true, the window to complete AAD authentication will open in the default browser on your machine. For instance, my default browser is set to Chrome (sorry, Edge 😉). So now my authentication window opens as a browser tab in my existing open Chrome window. Then I can enter my credentials as normal. And changing the browser fixed my issues with the authentication window being slow.

Microsoft Technologies

What to know about the new accessible Power BI themes

In February 2023, Microsoft released some new Power BI themes that are more accessible than the other themes available by default. The blog post mentions the prevalence of color vision deficiency (CVD, also called colorblindness) and discusses color contrast.

While color contrast is important for accommodating color vision deficiency, it’s also important for those with low vision. Color contrast (the way WCAG measures it) is largely about differences in luminance (relative brightness). In addition to general low vision and color vision deficiency there are several other common conditions that affect our ability to see data visualizations, including glaucoma and cataracts. There are also situational issues such as viewing a visualization in direct sunlight.

Everyone’s vision is a little different. It is rare (impossible?) that a color theme is accessible for everyone. For instance, while many people with color vision deficiency have trouble distinguishing red and green hues, others have trouble distinguishing blue hues. So when we optimize to accommodate one condition, we may make things more difficult for another condition. This happened with the change in accent color in Power BI Desktop from yellow to teal. Changing to teal increased color contrast, which was great for people with low vision, but it caused new issues for some people with color vision deficiency.

While I am very happy to see these new color themes, I hope everyone understands that they aren’t just generally accessible for all uses. As mentioned in the blog post, they specifically have better color contrast to achieve a contrast of at least 3:1, which is the contrast recommended by WCAG for non-text content.

One thing to understand is that the order of the theme colors matters. They are saying the color contrast between the first and second colors is sufficient (3:1). But the contrast between the first and third colors is not, and the contrast between the second and sixth colors is not. I had a friend with CVD look at the themes, and there were colors in each theme that were difficult for him to differentiate in general. But they were not colors next to each other in order.

Let’s look at Accessible City Park as an example.

Here’s the original theme.

Screenshot from Coolors.co showing the colors from the Accessible City Park theme in Power BI
Accessible City Park theme from Power BI

This theme includes reds and greens and blues and purples, but the similar colors are not next to each other.

Color contrast measurement showing the contrast between the first and third colors. The contrast is 1, which is below the requirement for text or graphical components.
Color contrast measurement between the first and third colors in the Accessible City Park theme
Color contrast measurement showing the contrast between the second and sixth colors. The contrast is 1.12, which is below the requirement for text or graphical components.
Color contrast measurement between the second and sixth colors in the Accessible City Park theme

I wouldn’t want to use the first and third colors next to each other, or in a way where I require the user to be able to tell the difference between them (for example, indicating statuses by using the colors as cell backgrounds in a table).

What to do and know

Here’s how I approach my use of color. I try to ensure good color contrast of components from their background (3:1 for graphical components and 4.5:1 for text). This is why you’ll usually see me use an off-white or light gray background with dark text and medium graph colors. I always try to use something other than (or in addition to) color to indicate meaning, like symbols or text. If I know someone in my intended audience has a specific condition such as color vision deficiency, I’ll optimize and test for that.
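Those contrast thresholds come from WCAG’s relative luminance formula. As a small Python sketch (not part of any Power BI tooling, just the math), here is how the contrast ratio between two RGB colors is computed:

```python
def _linearize(c: float) -> float:
    # Convert an sRGB channel value (0-1) to linear light, per WCAG 2.x
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    # Weighted sum of linearized channels
    r, g, b = (_linearize(v / 255) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1: tuple[int, int, int], rgb2: tuple[int, int, int]) -> float:
    # WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05), ranges 1 to 21
    l1, l2 = relative_luminance(rgb1), relative_luminance(rgb2)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

# White on black is the maximum possible contrast, 21:1
print(round(contrast_ratio((255, 255, 255), (0, 0, 0)), 2))  # 21.0
```

A ratio of at least 3:1 passes the WCAG graphical-component requirement; 4.5:1 passes the requirement for normal-size text.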

Here’s what I hope you got from this post:

  1. Having these new accessible themes is a good step forward, but that doesn’t mean the colors are globally accessible.
  2. Color contrast is important in creating accessible data visualizations, so having these themes available so people don’t have to come up with their own accessible color palettes is great.
  3. These new accessible themes were meant to be used in a specific way in order to improve accessibility. You can’t just use any two colors in the theme next to each other.

What do you think?

Have you tried the new themes? What do you think of them? Feel free to leave a comment with your thoughts.

Microsoft Technologies, Power BI, Power Query

Unpivot a matrix with multiple fields on columns in Power Query

I had to do this for a client the other day, and I realized I hadn’t blogged about it. Let’s say you need to include data in a Power BI model, but the only source of the data is a matrix that is output from another system. And that matrix has multiple fields populating the columns. An example of this is below. The matrix has fiscal year and product category on columns, vertical on rows, and the profit metric populating the values.

A matrix in Excel with Fiscal Year and Product Category on Columns and Vertical on Rows with Profit shown in the Measures.

You may know about the wonderful unpivot functionality in Power Query, but to handle this matrix, you first need to Transpose.

The steps to turn this matrix into a table are:

  1. Transpose the query.
  2. Remove the last column that contains the vertical totals.
  3. Filter out the “Totals” value in the second column, which contains the product categories.
  4. Use the first row as headers.
  5. Select the Fiscal Year, Product Category, and Metric columns. Select Unpivot Other Columns.
  6. Rename the Attribute column to Verticals.
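The steps above can be sketched in Power Query M. This assumes the raw matrix is already loaded as Source; the column names and output names are placeholders:

```
let
    // Swap rows and columns so fiscal year and product category become columns
    Transposed = Table.Transpose(Source),
    // Drop the last column, which holds the vertical totals
    NoTotalCol = Table.RemoveColumns(
        Transposed, {List.Last(Table.ColumnNames(Transposed))}),
    // Filter out the "Totals" rows in the product category column
    NoTotalRows = Table.SelectRows(NoTotalCol, each [Column2] <> "Totals"),
    // Promote the first row to column headers
    Promoted = Table.PromoteHeaders(NoTotalRows, [PromoteAllScalars = true]),
    // Unpivot everything except the three descriptive columns
    Unpivoted = Table.UnpivotOtherColumns(
        Promoted, {"Fiscal Year", "Product Category", "Metric"}, "Vertical", "Value")
in
    Unpivoted
```

Each let step corresponds to one of the numbered steps, so you can follow along in the Applied Steps pane.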

Transposing a table turns its rows into columns and its columns into rows.

A query with fiscal year and product category on columns and vertical on rows is transposed. It becomes a query with fiscal year in the first column, product category in the second column, and the verticals become columns instead of rows.
Transposing a query in Power Query

We don’t need the totals columns or rows because Power BI can recalculate those, and we don’t want to double-count profits.

In order to unpivot, we need to promote the first row to column headers, so the first column is labeled Fiscal Year and the fourth column is labeled Vertical Z.

The first three columns are as they should be, but the verticals need to change from columns to rows. This is accomplished by unpivoting. But you only want to unpivot the verticals and leave the fiscal year, product category, and metric columns as they are.

Then make sure column names are user-friendly, and it’s finished. You may also opt to remove the Metric column and rename the value column, if there is only one metric.

A query with 4 columns: fiscal year, product category, verticals, profit
The finished product after transposing and unpivoting
Azure, Databricks, Unity Catalog

External tables and views in Azure Databricks Unity Catalog

I’ve been busy defining objects in my Unity Catalog metastore to create a secure exploratory environment for analysts and data scientists. I’ve found a lack of examples for doing this in Azure with file types other than delta (maybe you’re reading this in the future and this is no longer a problem, but it was when I wrote this). So I wanted to get some more examples out there in case it helps others.

I’m not storing any data in Databricks. I’m leaving my data in the data lake and using Unity Catalog to put a tabular schema on top of it (hence the use of external tables vs. managed tables). To reference an ADLS account, you need to define a storage credential and an external location.

External tables

External tables in Databricks are similar to external tables in SQL Server. We can use them to reference a file or folder that contains files with similar schemas. External tables can use the following file formats:

  • delta
  • csv
  • json
  • avro
  • parquet
  • orc
  • text

If you don’t specify the file format in the USING clause of your DDL statement, it will use the default of delta.

Below is an example of creating an external table from a single CSV file.

CREATE TABLE mycatalog.myschema.external_table1
USING CSV
OPTIONS (header "true", inferSchema "true")
LOCATION 'abfss://containername@storageaccountname.dfs.core.windows.net/TopFolder/SecondFolder/myfile.csv';

Because I have used the LOCATION clause, this is an external table that stores just metadata. This SQL locates the specified file in my data lake and has Databricks create the schema based upon that file instead of me defining each column. Notice that I have specified in the options on the third line that there is a header row in my file and that Databricks should figure out the schema of the table.

Alternatively, I could explicitly define the columns for my external table. You can find the list of supported data types here.

CREATE TABLE mycatalog.myschema.external_table1 
(
  colA  INT,
  colB  STRING,
  colC  STRING
)
USING CSV 
OPTIONS (header "true") 
LOCATION 'abfss://containername@storageaccountname.dfs.core.windows.net/TopFolder/SecondFolder/myfile.csv';

External views

I had some JSON data in my lake that Databricks couldn’t automatically convert to a table so I created some external views. My data had a format similar to the below, with each document containing a single array that contained multiple objects, some of which were nested.

{
    "mystuff": [
        {
            "cola": "1",
            "colb": "2",
            "colc": "abc",
            "nestedthing": {
                "id": 1,
                "name": "thing1"
            }
        },
        {
            "cola": "2",
            "colb": "4",
            "colc": "def",
            "nestedthing": {
                "id": 22,
                "name": "thing22"
            }
        },
        {
            "cola": "3",
            "colb": "6",
            "colc": "ghi"
        }
    ]
}

The example view below directly queries a file in the data lake.

CREATE VIEW mycatalog.myschema.external_view1
AS
SELECT
  src.cola,
  src.colb,
  src.colc,
  src.nestedthing.id,
  src.nestedthing.name
FROM
  (
    SELECT
      explode(mystuff) src
    FROM
      json.`abfss://containername@storageaccountname.dfs.core.windows.net/TopFolder/SecondFolder/myfile2.json`
  ) x

To reference the file in the data lake in the FROM clause of the query, we specify the file format first (json), followed by a dot, and then the file path surrounded by backticks (not single quotes). To reference a folder instead, we would just end the path at the folder name (no trailing slash is necessary).

The explode() function is great for turning objects in an array into columns in a tabular dataset. To access nested objects, you can use dot notation. If you need to parse more complex JSON, this is a helpful resource.

The query from the view above creates the following output.

A table containing 5 columns: cola, colb, colc, id, name.

I’m not sure yet if there are any consequences (performance? security?) of defining a view like this rather than first creating an external table. I couldn’t get the external table created without modifying the JSON files, which I was trying to avoid. I do know the view produces the correct results. If you have experimented with this, let me know what you learned.

Azure, Azure Data Factory, Microsoft Technologies

Use the output of a Script activity as the items in a ForEach activity in Data Factory

In early 2022, Microsoft released a new activity in Azure Data Factory (ADF) called the Script activity. The Script activity allows you to execute one or more SQL statements and receive zero, one, or multiple result sets as the output. This is an advantage over the stored procedure activity that was already available in ADF, as the stored procedure activity doesn’t support using the result set returned from a query in a downstream activity.

However, when I went to find examples of how to reference those result sets, the documentation was lacking. It currently just says:

“For consuming activity output resultSets in down stream activity please refer to the Lookup activity result documentation.”

Microsoft Learn documentation on the Script activity

Populate the items in the ForEach activity

Similar to a Lookup activity, the Script activity can be used to populate the items in a ForEach activity, but the syntax is a bit different.

Let’s say we have a Script activity followed by a ForEach activity. The Script activity has a single result set.

A Script activity in Azure Data Factory with a ForEach activity

When I populate the items property of my ForEach activity, I use the following expression:
@activity('SCR_ScriptActivity').output.resultSets[0].rows

It starts similar to how we reference output from a Lookup activity. I reference the activity and then the output. But then, instead of value, I use resultSets[0].rows.

This makes sense when you look at the output from the activity.

{
   "resultSetCount":1,
   "recordsAffected":0,
   "resultSets":[
      {
         "rowCount":4,
         "rows":[
            {
               "colA":1
            },
            {
               "colA":2
            },
            {
               "colA":3
            },
            {
               "colA":4
            }
         ]
      }
   ],
   "outputParameters":{
      
   },
   "outputLogs":"",
   "outputLogsLocation":"",
   "outputTruncated":false,
   "effectiveIntegrationRuntime":"AutoResolveIntegrationRuntime (East US)",
   "executionDuration":1,
   "durationInQueue":{
      "integrationRuntimeQueue":1
   },
   "billingReference":{
      "activityType":"PipelineActivity",
      "billableDuration":[
         {
            "meterType":"AzureIR",
            "duration":0.016666666666666666,
            "unit":"Hours"
         }
      ]
   }
}

I want the output from the first (in this case, only) result set, so that’s resultSets[0]. The data returned is in the rows array in that result set. So that’s resultSets[0].rows.
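Inside the ForEach activity, each row of that result set becomes the current item. So, for the sample output above, a downstream activity inside the loop references a column value with the expression:

```
@item().colA
```

On the first iteration this resolves to 1, on the second to 2, and so on through the rows array.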

How are you liking the script activity in Data Factory? Is there anything else you wish were included in the documentation? Let me know in the comments.

Microsoft Technologies, Power BI, Workout Wednesday

Clickable SVG images in Power BI using the HTML Content custom visual

People have done creative things with SVG measures in Power BI, ranging from KPI cards to infographics to fun games.

For my latest Workout Wednesday challenge, I used SVG measures to make holiday cards that open on a specified date.

Power BI report with 1 open Christmas card and 3 closed cards.
In the Power BI report for Workout Wednesday 2022 Week 48, the holiday cards are populated using SVG measures

When you click on one of the holiday cards, it navigates to a specified URL. This was made possible by using the HTML Content custom visual.

The navigation to the URL is achieved by modifying the SVG code to include an href attribute. Depending on the placement of the href attribute, you can make one part of the SVG image or the entire image navigate to a URL when clicked.

Step by Step

To make a clickable SVG image for Power BI, there are 7 steps:

  1. Open the SVG file in a text editor or HTML editor.
  2. Replace all double quotes with single quotes.
  3. Add an href attribute around the content you want to be clickable.
  4. Create a measure in Power BI and paste in the contents of the SVG.
  5. Add the HTML Content visual to a report page.
  6. Populate the values of the visual with the measure.
  7. In the format pane for the visual, set Allow Opening URLs to On.

For example, I have an SVG of a coffee cup.

coffee cup with steam coming out of it

If I open it in Notepad++ (you can also use Visual Studio code or another editor), it looks like this.

HTML for an SVG image opened in Notepad++

Because we are putting the contents in a DAX measure, we need to replace the double quotes with single quotes.

The Find and Replace dialog in Notepad++ set to find double quotes and replace them with single quotes.

Then I add the href attribute. I want my entire image to navigate to my website (DataSavvy.me) when it is clicked. So I add <a href='https://datasavvy.me'> just after the opening <svg> tag, and I add a closing </a> at the end. Remember that the URL should be surrounded by single quotes rather than double quotes.
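As a simplified sketch (a plain circle standing in for the coffee cup artwork), the wrapped SVG looks like this:

```
<svg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 100 100'>
  <a href='https://datasavvy.me'>
    <circle cx='50' cy='50' r='40' fill='#6f4e37' />
  </a>
</svg>
```

Because the <a> element wraps everything inside the <svg> tag, clicking anywhere on the image triggers the navigation. Wrapping only a single shape would make just that shape clickable.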

HTML code for an SVG image with an href attribute added.

Then I create a measure called SVG. I enter double quotes, paste the content from Notepad++, and add closing quotes at the end. Because I’m using the HTML Content visual, I don’t have to add "data:image/svg+xml;utf8," at the beginning of my measure as I would if I were using this in a table visual.

Now I add the HTML Content visual and put my SVG measure in the Values field well.

Power BI Desktop showing a coffee cup image on a report page. The coffee cup visual is selected. The measure named SVG is placed in Values.

With the visual selected, I go to the formatting pane, expand the Content Formatting section, and turn Allow Opening URLs to On.

The format pane showing the Allow Opening URLs option is set to on for the HTML Content visual.

When I hover over the image, the cursor changes, indicating the image is clickable.

A screenshot from Power BI Desktop with a cursor hovering over the image, indicating the image is clickable

When I click the image, I get a prompt to allow navigation to the url I put in the SVG.

A dialog in Power BI that says "You are about to navigate to: https://datasavvy.me. The options available are "OK" and "Cancel".

New possibilities unlocked

While static clickable SVGs are pretty cool, the potential is really in the fact that we can dynamically populate the SVG based upon data in our dataset. You can change the entire image or an attribute of the image (color, size, URL, etc.) based upon a slicer selection.

Now that you can make dynamic clickable images in Power BI, how do you plan to use them?