Azure, Azure Data Lake, Microsoft Technologies

Initial Thoughts on Dremio

I’ve been working on a project for the last few months with a client who has chosen to implement Dremio in Azure. Dremio is a data lake engine that creates a semantic layer and supports interactive queries.

Dremio logo
The Dremio logo

It uses Apache Arrow, Gandiva, and Parquet files under the hood. It runs on either Linux VMs or Kubernetes containers. Like most big data systems, there is at least one coordinator node and one or more executor nodes. These nodes communicate and are managed using Apache Zookeeper. Client applications connect to Dremio via ODBC, JDBC, REST APIs, or Arrow Flight. Dremio can read from storage accounts, external databases, and a few other sources.

Dremio stores data in the following places:

  • Metadata is stored in a RocksDB database on the coordinator node.
  • Frequently read data is cached on the executor node.
  • Memory-intensive query operations may cause an executor node to spill Arrow buffers from RAM to disk.
  • Reflections, user uploads, and query results are stored in the data lake.

Dremio is organized into spaces, which can contain folders and datasets. The key objects in Dremio are:

  • Data source – a connection to an external system (e.g., a storage account or database) whose data should be accessed via Dremio
  • Physical Dataset – an HDFS directory or a database table
  • Virtual Dataset – a view of sorts, created using the Dremio UI or by writing SQL, that references one or more physical or virtual datasets and also provides lineage to its sources
  • Reflection – a materialized view that is transparent to users and used to improve query performance; this appears to be implemented by Dremio querying the source and storing the results as Parquet files for quicker access
  • Space – a shared location for virtual datasets, a way to group related datasets and provide user access

Once you have your spaces and virtual datasets set up, it feels kind of like a database. If you connect with Power BI, virtual datasets appear as views and physical datasets appear as tables. Dremio metadata (catalogs, schemas, physical datasets, virtual datasets and columns) can be accessed using INFORMATION_SCHEMA queries, which is conveniently familiar if you are used to working with SQL Server.
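
As a quick illustration, here is a minimal sketch of browsing that metadata from Python over ODBC. It assumes the Dremio ODBC driver is installed and a DSN named Dremio exists; the DSN name and credentials are placeholders, and the quoting of "TABLES" should be checked against your Dremio version's SQL dialect.

import pyodbc

# Connect through the Dremio ODBC driver (DSN name and credentials are placeholders).
conn = pyodbc.connect("DSN=Dremio;UID=myuser;PWD=mypassword")
cursor = conn.cursor()

# List physical and virtual datasets, much like listing tables and views in SQL Server.
# "TABLES" is quoted because it can collide with a reserved word in Dremio SQL.
cursor.execute(
    'SELECT TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE '
    'FROM INFORMATION_SCHEMA."TABLES" '
    'ORDER BY TABLE_SCHEMA, TABLE_NAME')

for schema_name, table_name, table_type in cursor.fetchall():
    print(f"{schema_name}.{table_name} ({table_type})")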

Some nice features found in Dremio on Azure

  • Dremio allows Single Sign-On with AAD credentials. Permissions can be granted to individual users or AAD groups.
  • Dremio can be implemented in a virtual network in Azure. The executor nodes can use Private Link to access ADLS (Azure Data Lake Storage Gen 2) over a private endpoint.
  • Changes to virtual datasets are tracked in Dremio. It’s easy to revert to a previous version at any time.
  • Dremio gives you visibility to the jobs running queries, both for ad hoc queries from client tools and for refreshing reflections.
  • Administrators can create rules to assign queries to different queues in order to provide workload isolation and predictability for users.
  • When reviewing jobs, you can see a sort of query plan as well as which jobs were able to use a reflection to accelerate a query.
  • The lineage view for a virtual dataset is nice for understanding dependencies.
  • You can trigger refreshes of metadata or reflections via the REST API, which is handy if you have ETL processes adding new data to your data lake and you want the refreshes to occur at the end of the ETL process (see the sketch after this list).
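
As an example of that last point, here is a rough sketch of kicking off a reflection refresh from Python at the end of an ETL run. The host, credentials, and dataset path are placeholders, and the endpoint paths should be verified against the REST API documentation for your Dremio version.

import requests

base_url = "https://dremio.example.com:9047"  # placeholder coordinator address

# Log in and build the auth header (Dremio expects the token prefixed with "_dremio").
login = requests.post(f"{base_url}/apiv2/login",
                      json={"userName": "etl_user", "password": "etl_password"})
headers = {"Authorization": "_dremio" + login.json()["token"]}

# Look up the physical dataset that the ETL process just loaded...
dataset = requests.get(f"{base_url}/api/v3/catalog/by-path/mysource/sales/orders",
                       headers=headers).json()

# ...and ask Dremio to refresh the reflections that depend on it.
requests.post(f"{base_url}/api/v3/catalog/{dataset['id']}/refresh", headers=headers)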

Some rough edges on Dremio in Azure

  • Dremio was initially built for AWS, not Azure. This is evident in the training materials, the product roadmap, and the knowledge of the Dremio implementation specialists. This is not to say it doesn’t work on Azure, just that the implementation is a bit rougher (e.g., no Azure templates made for you), and a couple of features are unavailable.
  • Dremio doesn’t integrate with Azure Key Vault. You store the service principal secret or storage account access key in a configuration file on the Linux VM. I’ve been told this is on the roadmap, but I didn’t hear a date when it would be available.
  • You can enable integration points on the Dremio website where you can click a button to open a connection to a virtual dataset in a BI tool such as Power BI or Tableau. For Power BI, this downloads a PBIDS file with a connection to that specific virtual dataset. This would be fine if everything you need is in this one dataset, but if you need to reference multiple virtual datasets, this is a bit annoying. Think of it like connecting to a specific database table instead of to the database in general. You might want to use that table, but you might also want to find other useful tables to combine in your Power BI model. You can open Power BI and connect to Dremio in general and navigate from there with no problems. I’m just pointing out that the buttons in the UI don’t seem that useful.
  • Dremio doesn’t support passthrough authentication on ADLS. All queries to the data lake are made in the context of the Dremio application, not the individual user. This means that you may need to set permissions twice for your data lake if you have other tools directly accessing the data lake instead of using Dremio. The idea is that most tools will connect through Dremio to take advantage of the semantic layer. But it would be nice to have, just to simplify security.

Advice we received in training

  • Unlike with nesting views in SQL Server, it’s ok to create multiple layers of virtual datasets. You want to design the semantic layer (the virtual datasets) to reuse common logic instead of repeating it across multiple views.
  • The standard design pattern for the semantic layer is to have a layer of “staging views” that have a 1-to-1 mapping to physical datasets and very little transformation beyond fixing data types and light cleansing. On top of the Staging layer is the Business layer, which includes virtual datasets containing business logic; it should handle most of the query workload. On top of the Business layer is the Application layer, which includes virtual datasets purpose-built to support specific applications or reports (see the sketch after this list).
  • Star schemas are not optimal in Dremio. You likely want to denormalize even more than that. This is because it is more expensive to perform a join than to search through a large number of values in a column.
  • When creating a reflection, setting the sort column is somewhat like creating an index in a SQL database. It helps prune data when applying a query filter or performing a join.
  • Reflections can be used to partition data. If you find you have a single large file, you can use a Reflection to split it by a low cardinality value to improve query performance. When you do this, it creates a parquet file per partition.
  • Reflections can be set to use an incremental refresh, but only if the data is additive and existing data is not updated.
  • You don’t need a reflection for everything. Make them as small and reusable as possible.
  • Try to avoid thousands of tiny files, and aim for a few medium to large files (MBs to GBs). This is common for most data lake engines as there is an overhead cost for file enumeration.
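
Coming back to the layered design pattern above, here is a hedged sketch of creating a staging-layer virtual dataset over ODBC. It assumes Dremio's CREATE VDS syntax (check it against your version) and reuses the placeholder DSN from the earlier metadata sketch; the space, folder, and column names are made up for illustration.

import pyodbc

conn = pyodbc.connect("DSN=Dremio;UID=myuser;PWD=mypassword", autocommit=True)
cursor = conn.cursor()

# A staging-layer virtual dataset: 1-to-1 with the physical dataset, with only
# data type fixes and light cleansing. All object and column names are placeholders.
cursor.execute("""
    CREATE OR REPLACE VDS Staging.Sales.Orders AS
    SELECT
        CAST(OrderID AS INTEGER)   AS OrderID,
        CAST(OrderDate AS DATE)    AS OrderDate,
        TRIM(CustomerName)         AS CustomerName,
        CAST(OrderTotal AS DOUBLE) AS OrderTotal
    FROM mysource.sales.orders
""")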

Some other thoughts

  • Dremio advertises that you don’t need data integration processes like you would for a data warehouse. I find this to be somewhat inaccurate for two main reasons. First, if you need to acquire data from APIs or other applications to which Dremio can’t connect, you will still need to copy data to your data lake. Second, when you use a Reflection to speed up a query, you are creating a copy of the data in your data lake stored as one or more Parquet files. Data virtualization technology hasn’t actually matured to the point of not needing ETL at all. I can see how Dremio would lessen the need for ETL, but let’s recognize that you’ll probably still need some and that Dremio is doing a bit of data loading of its own. So the question becomes where, and with what tools, you would like to do this. You can have Dremio do your transforming and loading in the form of reflections, or you can load your own data, already transformed, to the data lake. You will likely end up with a bit of both over time.
  • Consider the skillsets of the people who will manage the system, as well as those who will build and query the datasets. If you have a team of admins who only know Windows, they are going to need to skill up on Linux. If your BI team or analysts don’t know SQL, they will probably struggle to build the virtual datasets.
  • This system can get pretty expensive pretty fast (which is true of most big data systems). You’ll want to be sure to automate the shutdown of the nodes in dev and test environments when they are not in use, so you can save a bit of money. And remember that you can size up your nodes later if you find you don’t have adequate performance. Oversizing at the outset will waste money.
  • Dremio is a (well-funded) startup with a product that is built on several open source technologies, and they don’t seem to have a public roadmap. In my experience, they have been good about taking feedback to add to the roadmap and with sharing what is soon to be released. But if you are building your company’s BI strategy with Dremio as a key tool, you probably want more than that. It sounds like they share more with paying customers. I would want that information before making a purchasing decision.
  • Overall, I can see why Dremio has been adopted by several large companies. And I have enjoyed setting up the Azure architecture around it and building virtual datasets. I wish they would add some Azure-specific features to optimize things and make security easy, but it’s a promising platform.

This was my first project using Dremio. If you’ve used Dremio, please share your experience in the comments.

Azure, Azure Data Lake, Microsoft Technologies, Power BI

Granting ADLS Gen2 Access for Power BI Users via ACLs

It’s common that users only have access to certain folders in an Azure Data Lake Storage container. These permissions are provided not through Azure RBAC (role-based access control) roles but through POSIX-like ACLs (access control lists).

The current Power BI documentation mentions only Azure RBAC roles, but it is possible to connect to a folder with permissions granted through ACLs.

You can manage ACLs through the Azure Storage Explorer application or in the Storage Explorer preview in the Azure Portal. As an example, I have a storage account with the hierarchical namespace enabled. In the container named filesystem1 is a folder called Test. Test contains 3 files, and I want a user to import Categories.csv into Power BI.

Azure Storage Explorer showing the mmldl storage account with filesystem1 selected. The Test folder in filesystem1 is selected and 3 files are shown.
Data lake storage account with files located in a folder called Test

If I select the Test folder and then select Manage Access, I can see that an AAD user named Data Lake User has been granted access and default ACLs. Note that the user needs at least Read and Execute. Write isn’t necessary if they don’t need to change the file.

The Manage Access window in Azure Storage Explorer. The user named Data Lake User is selected. Access and Default permissions are set to give the user Read, Write, and Execute.
Managing access on the Test folder for the Data Lake Access user

But with those permissions on the Test folder, I’m not able to connect to it from Power BI Desktop. If I try, I’ll get an error that says “Access to the resource is forbidden.”

Power BI error that says "Unable to connect. We encountered an error while trying to connect. Details: Access to the resource is forbidden."
Power BI error encountered when a user doesn’t have sufficient permissions to access a file in the data lake

This is because the user is missing some permissions. We need to grant Execute permissions on all parent folders up to the root (the container).

In this case, there is only one level above my Test folder. So I select the filesystem1 container, go to Manage Access, and grant it Execute permissions.

Manage Access window in Azure Storage Explorer showing permissions for Data Lake user on filesystem1. Execute is selected for both Access and Default permissions.
Adding Execute permissions to the parent container

Note that changing the Default ACL on a parent does not affect the access ACL or default ACL of child items that already exist. So if you have existing subfolders and files to which users need access, you will need to grant access at each parent level because the default ACLs won’t apply.
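
If you would rather script these grants than click through Storage Explorer, here is a minimal sketch using the azure-storage-file-datalake and azure-identity Python packages. The storage account URL and AAD object ID are placeholders, setting an ACL replaces the existing list (so the sketch reads the current entries first), and whether "/" addresses the container root can depend on the SDK version.

from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://mystorageaccount.dfs.core.windows.net",  # placeholder
    credential=DefaultAzureCredential())
filesystem = service.get_file_system_client("filesystem1")
user_oid = "00000000-0000-0000-0000-000000000000"  # AAD object ID of Data Lake User

def add_acl_entries(path, new_entries):
    # set_access_control replaces the whole ACL, so append to the existing entries.
    directory = filesystem.get_directory_client(path)
    current_acl = directory.get_access_control()["acl"]
    directory.set_access_control(acl=current_acl + "," + ",".join(new_entries))

# Execute on the container root (and any other parent folders in the path)...
add_acl_entries("/", [f"user:{user_oid}:--x"])

# ...and Read + Execute (access and default) on the folder that holds the files.
add_acl_entries("Test", [f"user:{user_oid}:r-x", f"default:user:{user_oid}:r-x"])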

Thanks to Gerhard Brueckl for noting that I needed Execute permissions on parent folders when I got stuck in testing.

If you find yourself hitting that access forbidden message in Power BI when accessing a file in ADLS Gen2, double check the user’s Execute permissions on the parent folders.

Azure, Azure Data Factory, Azure Data Lake, Microsoft Technologies

I Like to Move It, Move It – But Azure Data Factory Doesn’t

Note: This post is about Azure Data Factory V1
I’ve spent the last couple of months working on a project that includes Azure Data Factory and Azure SQL Data Warehouse. ADF has some nice capabilities for file management that never made it into SSIS, such as zipping/unzipping files and copying from/to SFTP. But it also has some gaps I had to work around. My project involved copying data from on-premises SQL Server to an ORC file in a data lake staging area for ingestion into an Azure SQL Data Warehouse through Polybase. Then I had planned to move that file to the Raw area of the data lake for archiving.

In other words, as a great lemur once sang, I like to move it.

But at this time ADF doesn’t support that. You can copy a file with a copy activity, but you cannot actually move (i.e., copy and delete).

Luckily, we had a workaround for our situation. If you tell ADF to copy data to a file that already exists in the specified location in the data lake, it will overwrite the existing file. We made sure the file name is always the same for each table in the staging area so there is always only one file per table.

What we ultimately ended up with was:

Azure Data Flow

  1. Retrieve time sliced data from on-premises SQL Server source via the Data Management Gateway.
  2. Land data in the Raw area of the data lake as ORC file.
  3. Copy file to staging.
  4. Execute stored procedure to populate data warehouse through Polybase.

I landed the data in Raw first so that we would not have to pull from SQL again if we needed to re-run a slice. Data latency wasn’t a huge issue for this client – we had some pipelines that ran hourly and some that ran daily. The extra seconds it took to land the file in Raw was not a concern.

For now, if you do need to actually move or delete, you can use a custom C# activity to delete files. I chose not to do this because I didn’t want to add another technology for the client to learn/manage while adopting Azure. This may be the way to go for other projects.
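
If you handle the delete outside of ADF instead, for example from a small scheduled script rather than a custom C# activity, a rough sketch with the azure-datalake-store Python SDK (the ADLS Gen1 client) might look like this. The tenant, service principal, store name, and path are all placeholders.

from azure.datalake.store import core, lib

# Authenticate with a service principal (all values below are placeholders).
token = lib.auth(tenant_id="my-tenant-id",
                 client_id="my-client-id",
                 client_secret="my-client-secret")
adls = core.AzureDLFileSystem(token, store_name="mydatalakestore")

# Delete the staged file after it has been loaded into the warehouse,
# completing the "move" that the copy activity started.
staged_file = "/staging/MySourceDB/dbo/MyTable/MyTable.orc"
if adls.exists(staged_file):
    adls.rm(staged_file)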

If you think moving (copying and deleting) files should be a first class citizen in Azure Data Factory, please vote for the idea and spread the word for others to vote.

You don’t have to thank me for getting that song stuck in your head for the rest of the day.

Azure, Azure Data Factory, Azure Data Lake, Biml, Microsoft Technologies

Copying data from On Prem SQL to ADLS with ADF and Biml – Part 2

Note: This post is about Azure Data Factory V1
I showed in my previous post how we generated the datasets for our Azure Data Factory pipelines. In this post, I’ll show the BimlScript for our pipelines. Pipelines define the activities, identify the input and output datasets for those activities, and set an execution schedule. We were creating several pipelines with copy activities to copy data to Azure Data Lake Store.

We generated one pipeline per schedule and load type:

  • Hourly – Full
  • Hourly – Incremental
  • Daily – Full
  • Daily – Incremental

We also generated some one-time load pipelines for DR/new environment setup.

The first code file below is the template for the pipeline. You can see code nuggets for the data we receive from the generator file and for conditional logic we implemented. The result is one copy activity per source table within the appropriate pipeline.

In the second code file below, the lines following the // Generate pipelines comment near the end generate the pipelines. We read in the necessary data from the Excel file:

  • Schema name
  • Table name
  • Columns list
  • Incremental predicate

Sidenote: We wrote a quick T-SQL statement (not shown) to generate the columns list. This could have been done in our BimlScript, but it was something we changed after the fact to accommodate the limitations of Polybase (Dear Microsoft: Please fix). SQL was quicker and easier for us, but if I were to do this again I would add that into our BimlScript. We needed to replace new lines and double quotes in our data before we could read it in from the data lake. You can get around this issue by using .ORC files rather than text-delimited files. But ORC files aren’t human readable, and we felt that was important for adoption of the data lake with the client on this project. They were already jumping in with several new technologies and we didn’t want to add anything else to the stack. So our select statements list out fields and replace the unwanted characters in the string fields.
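
The exact T-SQL isn’t shown here, but as a rough illustration of the same idea, here is a hedged Python sketch that builds the column list, wrapping string columns in REPLACE() calls to strip the newlines and double quotes. The connection string, schema, and table names are placeholders.

import pyodbc

conn = pyodbc.connect("DSN=MySourceDB;Trusted_Connection=yes")  # placeholder DSN
cursor = conn.cursor()
cursor.execute(
    "SELECT COLUMN_NAME, DATA_TYPE "
    "FROM INFORMATION_SCHEMA.COLUMNS "
    "WHERE TABLE_SCHEMA = ? AND TABLE_NAME = ? "
    "ORDER BY ORDINAL_POSITION",
    "dbo", "MyTable")

string_types = {"char", "nchar", "varchar", "nvarchar", "text", "ntext"}
select_list = []
for column_name, data_type in cursor.fetchall():
    if data_type in string_types:
        # Strip the characters that break the delimited files read by Polybase.
        select_list.append(
            "REPLACE(REPLACE(REPLACE([{0}], CHAR(13), ''), CHAR(10), ''), '\"', '') AS [{0}]"
            .format(column_name))
    else:
        select_list.append(f"[{column_name}]")

# Paste the result into the ColumnListForSelect column of the metadata Excel file.
print(", ".join(select_list))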

Our Excel file looks like this.

ADF Biml Metadata

Columns B, C, L, and M are populated by Excel formulas. This is the file that is read in by the BimlScript in the code below.

In our generator file (which is the same file that was used to generate the datasets), we use the CallBimlScript function to call the pipeline template file and pass along the required properties (table, schema, frequency, scope, columns list, predicate).


<#@ import namespace="System.Data" #>
<#@ import namespace="System.Text" #>
<#@ property name="targetTables" type="DataView"#>
<#@ property name="frequency" type="string"#>
<#@ property name="scope" type="string"#>
{
"$schema": "http://datafactories.schema.management.azure.com/schemas/2015-09-01/Microsoft.DataFactory.Pipeline.json&quot;,
"name": "PL_Copy_MySourceDBToADLS_<#=frequency#>_<#=scope#>",
"properties": {
"description": "<#=frequency#> <#=scope#> copies of data from Source db to the data lake.",
"activities": [
<# var isFirst = true; foreach( DataRowView rowView in targetTables) {#>
<# DataRow row = rowView.Row; #>
<# string schemaName = row["SchemaName"].ToString();#>
<# string tableName = row["TableName"].ToString();#>
<# string columnList = row["ColumnListForSelect"].ToString(); #>
<# string predicate = row["IncrementalPredicate"].ToString(); #>
<#=isFirst ? "" : ","#>
{
"name": "Copy to Lake – <#=schemaName#>.<#=tableName#>",
"type": "Copy",
"inputs": [
{
"name": "DS_OnPremSQL_MySourceDB_<#=schemaName#>_<#=tableName#>"
}
],
"outputs": [
{
"name": "DS_DataLake_MySourceDB_<#=schemaName#>_<#=tableName#>"
}
],
"typeProperties": {
"source": {
"type": "SqlSource",
<# if (scope == "Full") {#>
"sqlReaderQuery": "SELECT <#=columnList#>, SYSDATETIME() AS LoadDateTime FROM <#=schemaName#>.[<#=tableName#>]"
<#} else if (scope == "Deltas" && frequency == "Hourly") {#>
"sqlReaderQuery": "$$Text.Format('SELECT <#=columnList#>, SYSDATETIME() AS LoadDateTime FROM <#=schemaName#>.[<#=tableName#>] WHERE <#=predicate#>', Time.AddHours(WindowStart, -5), Time.AddHours(WindowEnd, -5))"
<#} else if (scope == "Deltas" && frequency == "Daily") {#>
"sqlReaderQuery": "$$Text.Format('SELECT <#=columnList#>, SYSDATETIME() AS LoadDateTime FROM <#=schemaName#>.[<#=tableName#>] WHERE <#=predicate#>', WindowStart, WindowEnd)"
<# } #>
},
"sink": {
"type": "AzureDataLakeStoreSink"
}
},
"policy": {
"concurrency": 1,
"executionPriorityOrder": "OldestFirst",
"retry": 3,
"timeout": "01:00:00"
},
"scheduler": {
<# if (frequency == "Daily") {#>
"frequency": "Day",
"offset": "09:00:00",
<#} else if (frequency == "Hourly") {#>
"frequency": "Hour",
<# } #>
"style": "EndOfInterval",
"interval": 1
}
}
<# isFirst = false; }#>
],
<# if (frequency == "Hourly") {#>
"start": "2017-03-01T01:00:00",
<#}else {#>
"start": "2017-03-02T00:00:00",
<#}#>
"end": "9999-09-09"
}
}


<#@ template tier="10" #>
<#@ import namespace="System.Data" #>
<#@ import namespace="System.Text" #>
<#@ code file="BGHelper.cs" #>
<#@ import namespace="BGHelper" #>
<Biml xmlns="http://schemas.varigence.com/biml.xsd">
</Biml>
<#
string mdFilePath = "C:\\Users\\admin\\Source\\Workspaces\\Data Warehouse\\metadata";
string mdFileName = "TargetTableMetadata.xlsx";
string mdWorkSheetName = "Metadata$";
bool mdHasHeader = true;
string logPath = "C:\\Users\\admin\\Source\\Workspaces\\Data Warehouse\\data_factory\\generate_data_factory_biml\\log.txt";
string adfProjPath = "C:\\Users\\admin\\Source\\Workspaces\\Data Warehouse\\data_factory\\data_factory\\";
DataSet ds = new DataSet();
ds = ExcelReader.ReadExcelQuery(mdFilePath, mdFileName, mdWorkSheetName, mdHasHeader);
System.IO.File.AppendAllText(@logPath, "MetaData File Path: " + System.IO.Path.Combine(mdFilePath, mdFileName).ToString() +Environment.NewLine );
System.IO.File.AppendAllText(@logPath, "MetaData File Path: " + System.IO.File.Exists(System.IO.Path.Combine(mdFilePath, mdFileName)).ToString() +Environment.NewLine );
System.IO.File.AppendAllText(@logPath, "Dataset table count: " + ds.Tables.Count.ToString() + Environment.NewLine);
DataView dailyFulls = new DataView(ds.Tables["Metadata"],"Frequency = 'Daily' and [Changes Only] = 'No'","", DataViewRowState.CurrentRows);
DataView dailyDeltas = new DataView(ds.Tables["Metadata"], "Frequency = 'Daily' and [Changes Only] = 'Yes'", "", DataViewRowState.CurrentRows);
DataView hourlyFulls = new DataView(ds.Tables["Metadata"], "Frequency = 'Hourly' and [Changes Only] = 'No'", "", DataViewRowState.CurrentRows);
DataView hourlyDeltas = new DataView(ds.Tables["Metadata"], "Frequency = 'Hourly' and [Changes Only] = 'Yes'", "", DataViewRowState.CurrentRows);
//log count of results for each filter
System.IO.File.AppendAllText(@logPath, "Daily Fulls Count: " + dailyFulls.Count.ToString() + Environment.NewLine);
System.IO.File.AppendAllText(@logPath, "Daily Deltas Count: " + dailyDeltas.Count.ToString() + Environment.NewLine);
System.IO.File.AppendAllText(@logPath, "Hourly Fulls Count: " + hourlyFulls.Count.ToString() + Environment.NewLine);
System.IO.File.AppendAllText(@logPath, "Hourly Deltas Count: " + hourlyDeltas.Count.ToString() + Environment.NewLine);
//Generate datasets
foreach (DataRowView rowView in dailyFulls)
{
DataRow row = rowView.Row;
string schemaName = row["SchemaName"].ToString();
string tableName = row["TableName"].ToString();
string frequency = row["Frequency"].ToString();
string scope = "full";
System.IO.File.AppendAllText(@logPath, "DailyFulls | " + row["SchemaName"].ToString() + "." + row["TableName"].ToString() + " | " + row["ColumnListForSelect"].ToString() + Environment.NewLine);
System.IO.File.WriteAllText(@adfProjPath + "DS_OnPremSQL_MySourceDB_" + schemaName + "_" + tableName + ".json", CallBimlScript("DS_OnPremSQL_MySourceDB_Schema_Table.biml", schemaName, tableName, frequency));
System.IO.File.WriteAllText(@adfProjPath + "DS_DataLake_MySourceDB_" + schemaName + "_" + tableName + ".json", CallBimlScript("DS_DataLake_MySourceDB_Schema_Table.biml", schemaName, tableName, frequency, scope));
}
foreach (DataRowView rowView in dailyDeltas)
{
DataRow row = rowView.Row;
string schemaName = row["SchemaName"].ToString();
string tableName = row["TableName"].ToString();
string frequency = row["Frequency"].ToString();
string scope = "deltas";
System.IO.File.AppendAllText(@logPath, "DailyFulls | " + row["SchemaName"].ToString() + "." + row["TableName"].ToString() + " | " + row["ColumnListForSelect"].ToString() + Environment.NewLine);
System.IO.File.WriteAllText(@adfProjPath + "DS_OnPremSQL_MySourceDB_" + schemaName + "_" + tableName + ".json", CallBimlScript("DS_OnPremSQL_MySourceDB_Schema_Table.biml", schemaName, tableName, frequency));
System.IO.File.WriteAllText(@adfProjPath + "DS_DataLake_MySourceDB_" + schemaName + "_" + tableName + ".json", CallBimlScript("DS_DataLake_MySourceDB_Schema_Table.biml", schemaName, tableName, frequency, scope));
}
foreach (DataRowView rowView in hourlyFulls)
{
DataRow row = rowView.Row;
string schemaName = row["SchemaName"].ToString();
string tableName = row["TableName"].ToString();
string frequency = row["Frequency"].ToString();
string scope = "full";
System.IO.File.AppendAllText(@logPath, "DailyFulls | " + row["SchemaName"].ToString() + "." + row["TableName"].ToString() + " | " + row["ColumnListForSelect"].ToString() + Environment.NewLine);
System.IO.File.WriteAllText(@adfProjPath + "DS_OnPremSQL_MySourceDB_" + schemaName + "_" + tableName + ".json", CallBimlScript("DS_OnPremSQL_MySourceDB_Schema_Table.biml", schemaName, tableName, frequency));
System.IO.File.WriteAllText(@adfProjPath + "DS_DataLake_MySourceDB_" + schemaName + "_" + tableName + ".json", CallBimlScript("DS_DataLake_MySourceDB_Schema_Table.biml", schemaName, tableName, frequency, scope));
}
foreach (DataRowView rowView in hourlyDeltas)
{
DataRow row = rowView.Row;
string schemaName = row["SchemaName"].ToString();
string tableName = row["TableName"].ToString();
string frequency = row["Frequency"].ToString();
string scope = "deltas";
System.IO.File.AppendAllText(@logPath, "DailyFulls | " + row["SchemaName"].ToString() + "." + row["TableName"].ToString() + " | " + row["ColumnListForSelect"].ToString() + Environment.NewLine);
System.IO.File.WriteAllText(@adfProjPath + "DS_OnPremSQL_MySourceDB_" + schemaName + "_" + tableName + ".json", CallBimlScript("DS_OnPremSQL_MySourceDB_Schema_Table.biml", schemaName, tableName, frequency));
System.IO.File.WriteAllText(@adfProjPath + "DS_DataLake_MySourceDB_" + schemaName + "_" + tableName + ".json", CallBimlScript("DS_DataLake_MySourceDB_Schema_Table.biml", schemaName, tableName, frequency, scope));
}
// Generate pipelines
System.IO.File.WriteAllText(@adfProjPath + "PL_Copy_MySourceDBToADLS_Daily_Full.json", CallBimlScript("PL_Copy_MySourceDBToADLS.biml", dailyFulls, "Daily", "Full"));
System.IO.File.WriteAllText(@adfProjPath + "PL_Copy_MySourceDBToADLS_Daily_Deltas.json", CallBimlScript("PL_Copy_MySourceDBToADLS.biml", dailyDeltas, "Daily", "Deltas"));
System.IO.File.WriteAllText(@adfProjPath + "PL_Copy_MySourceDBToADLS_Hourly_Full.json", CallBimlScript("PL_Copy_MySourceDBToADLS.biml", hourlyFulls, "Hourly", "Full"));
System.IO.File.WriteAllText(@adfProjPath + "PL_Copy_MySourceDBToADLS_Hourly_Deltas.json", CallBimlScript("PL_Copy_MySourceDBToADLS.biml", hourlyDeltas, "Hourly", "Deltas"));
//Generate One-Time Pipelines
System.IO.File.WriteAllText(@adfProjPath + "PL_Copy_MySourceDBToADLS_OneTime_DailyFulls.json", CallBimlScript("PL_Copy_MySourceDBToADLS_OneTime.biml", dailyFulls, "Daily", "Full"));
System.IO.File.WriteAllText(@adfProjPath + "PL_Copy_MySourceDBToADLS_OneTime_DailyDeltas.json", CallBimlScript("PL_Copy_MySourceDBToADLS_OneTime.biml", dailyDeltas, "Daily", "Deltas"));
System.IO.File.WriteAllText(@adfProjPath + "PL_Copy_MySourceDBToADLS_OneTime_HourlyFulls.json", CallBimlScript("PL_Copy_MySourceDBToADLS_OneTime.biml", hourlyFulls, "Hourly", "Full"));
System.IO.File.WriteAllText(@adfProjPath + "PL_Copy_MySourceDBToADLS_OneTime_HourlyDeltas.json", CallBimlScript("PL_Copy_MySourceDBToADLS_OneTime.biml", hourlyDeltas, "Hourly", "Deltas"));
#>

The great thing about Biml is that I can use it as much or as little as I feel is helpful. That T-SQL statement to get column lists could have been Biml, but it didn’t have to be. The client can maintain and enhance these pipelines with or without Biml as they see fit. There is no vendor lock-in here. Just as with Biml-generated SSIS projects, there is no difference between a hand-written ADF solution and a Biml-generated ADF solution, other than that the Biml-generated solution is probably more consistent.

And have I mentioned the time savings? There is a reason why Varigence gives out shirts that say “It’s Monday and I’m done for the week.”

We made changes and regenerated our pipelines a few times, which would have taken hours without Biml. With Biml, it was no big deal.

Thanks to Levi for letting me share some of his code, and for working with me on this project!