
Working with Time Zones in the Power BI Relative Time Slicer and Filter

In the April 2020 release of Power BI Desktop, a new preview feature debuted that provides an easy way to filter your report down to a rolling time period: the relative time slicer and filter. If you’ve tried this feature, you may have noticed that the results are not exactly what you might expect, unless you live in a very specific time zone. This article will show you how to design around this behaviour.

The problem is that the relative time evaluated by these two controls is always compared against UTC times. The controls therefore assume that the time you provide is also in UTC. If you are a report author working with local time values in a single location, this behaviour may seem confusing. Both the filter and the slicer work the same way, so for our purposes we’ll just be showing the filter here.

As an example of this, I collect data from a weather sensor in 1 minute increments, and have done so for several months. I have a report that shows the temperature over time, and I want to build a report page that shows this information for the past 24 hours. The relative time filter, applied to the page, is the perfect control for doing this. It should also be noted that this data is collected in the Eastern Time Zone, which in the summer (as I write) is offset from UTC by -4 hours. The result is a report page that looks something like the below, with the filter applied.

You can see above that although the current time is 9:39 AM, and a 24 hour relative time filter is applied, we are only seeing results after 1:39 PM from the previous day. This is because the supplied value for time is local, not UTC, and 9:39 AM EDT corresponds to 1:39 PM UTC. The filter is working, but it’s not showing the results that we expected.

The solution to this problem is straightforward: we need to use a different field that has been converted to UTC for the filter, while continuing to display local time in the chart. There are many ways to do this with Power BI, and the best will depend on your model design, but if you want to make the change using DAX, you can create a calculated column with a formula similar to the following, where UTCOffset is the number of hours to add to local time to reach UTC (positive for zones behind UTC, negative for zones ahead of it):

TimeFieldUTC = TimeFieldLocal + UTCOffset/24

In my specific case above (EDT), the formula is:

ReadingTimeUTC = Time + 4/24

Another approach is to use Power Query to create a new column at refresh time. The equivalent Power Query custom column formula is:

[TimeFieldLocal] + #duration(0,UTCOffset,0,0)

Or again, in my specific case:

[Time] + #duration(0,4,0,0)

Once you have your corresponding UTC time values, simply replace the field used by the relative time filter with the UTC field. The filter will then be comparing UTC values to UTC values, and all will be well. The charts and display values can continue to use local times.

This approach does have a flaw in that the report needs to be edited to account for Daylight Saving Time/Standard Time transitions. Taking the Power Query approach allows us to use parameters, which can be changed in the service without editing the data model, but that still requires manual intervention. I would really like to see Power BI gain the ability to understand time zones by name, and to account for Daylight Saving Time changes automatically. In other words, to call a function with a time and a time zone name parameter, and have it return a time using -4 as an offset in the summer, and -5 in the winter. In the absence of that, this approach will have to do.


Formatting the X Axis in Power BI Charts for Date and Time

Dates and times are probably the most commonly used dimensions in Power BI charts, or any charts for that matter. Power BI contains a number of features that help to display data with respect to time. Features like the automated date hierarchy reduce the need for users to construct or connect to a date dimension table (even though they likely should), which helps casual users get to a solution more quickly. This is particularly true when using date/time on the axis of a chart. There are a lot of options for displaying this data, and they may not all be that well understood. This article will attempt to explain a number of them.

The scenario

We will be working with thermostat data taken in 5 minute increments over several months. The shape of the data is relatively simple. There are measures for outdoor temperature and heating/cooling system run times in seconds, as well as a date/time dimension named DateAndTime. An example can be seen below.

We want to plot these runtimes over time, and we will be working with a “Line and clustered column chart” to do this. The 4 different heating/cooling runtimes are used for the column values, and the Outdoor temperature is used for the line values (with average being the default aggregation behaviour). This gets us to our point – what is the best way to display time on the X axis?

Plotting with DateTime

When the DateAndTime column is added to the X axis, it is converted to a date hierarchy by default. This behaviour can be turned off (and in many cases, should be). We initially want to work with the raw datetime value, so we can control that by opening the dropdown in the Shared axis section of the chart and selecting the name of the dimension instead of “Date Hierarchy”.

Doing this with our data results in a rather messy looking chart.

The data here is far too granular to display all of it across all of the available times. By default, using a date or datetime dimension on an X axis will display continuously like this. However, we can control this behaviour through an X axis property on the chart itself.

Opening up the chart display properties, and then opening the X axis section, reveals that “Continuous” is selected for the Type property. This is the display mode that will scale the axis to include all available date/time values. The other option is “Categorical”. The Categorical option displays each date/time value as a discrete data element. Changing the axis type property to Categorical results in the chart appearing as follows.

The continuous and categorical options are only available for date and date/time dimensions. If the dimension in the X axis is not one of these types, the “Type” option will not appear.

Using Categorical, each and every date and time value is displayed on the X axis, and the data values are clearly resolved. However, in our case, there are far too many values to make this useful. Finding what we’re after would take a lot of scrolling. It’s best in this case (and in most cases) to view the data in aggregate, which is to say totals and averages across different time periods: years, months, days etc. This is where the Date Hierarchy shows value.

Formatting with Date Hierarchy

Switching our “DateAndTime” dimension back to “Date Hierarchy” immediately changes the chart to show all of the data aggregated by Year. It is also possible to see the detail of the hierarchy in the Shared axis property for the chart.

The top level of the hierarchy is shown, which is all of the data aggregated to the Year level.

I rarely use the “Quarter” level of the hierarchy, so I simply remove it, and have done so for the remainder of the operations. It can be removed by selecting the x beside it in the Shared axis property box.

If we want to see our data in a more granular fashion, we have three options – Drill down, Go down one level, and Expand all down one level, which are the icons listed left to right in the highlighted section in the image above. Drilling down is meant to be interactive. With Drill down selected, clicking on a data point in the chart will go down to the next level in the chart for that data point. It replaces the standard cross filtering or cross highlighting that would normally happen when selecting a data point. For example, with drill down turned on, clicking on any column for 2019 results in the chart below.

Notice that the X axis now shows month names instead of the year. This chart is showing our measures by month now, but only for the year 2019. The up arrow in the upper left corner can be selected at any time to go back up to year, or selecting one of the months will drill down further to show the values for all of the days in the selected month.

The second option, Go down one level, behaves in a similar fashion, but it does not filter to the year; it simply takes the chart down one level in the hierarchy without first filtering by year. This could be useful when comparing months to each other in aggregate. The X axis changes in the same way as drill down, showing the values for that level of the hierarchy.

If we want to show the data more granularly than the year level, but we don’t want to aggregate all of the same month names together, we can use the third option – Expand all down one level, or as I like to call it, “drill down and out”. Selecting this option results in the chart below.

We can see the data broken out by both year and month in order. This is a much richer view and still understandable. For example, you can see at a glance that 2018 was generally warmer than 2019, given the amount of cooling necessary. The title is automatically changed (if it wasn’t set manually) to reflect this configuration, and the X axis also shows both year and month.

In this particular example the X axis is still readable, but drilling down and out more than one level can be cumbersome, and very wordy. At the same time, you do need to know which year, month, and day a particular data point pertains to. The X axis formatting pane has some further options that help with this. By default, all of the hierarchy levels are concatenated together when a hierarchy is expanded in this way. Going into the chart format tab, and selecting the X axis, we can see an option for this – “Concatenate Labels”. Turning this off presents each level categorically on different lines. This, to my mind, is much easier to read, and is the configuration that I use.

The concatenate labels option only takes effect when a hierarchy is expanded past its root level.

The examples used above utilize a “Line and clustered column chart” but pertain to all of the standard visuals that employ an X and Y axis.


Increase your OneDrive Storage Limit beyond 5 TB

Note – 2021-01-26 – This article is still totally valid, but check the comment section below for an alternate method for getting your tenant enabled for > 5 TB storage.

OneDrive for Business offers “unlimited” storage. You would be excused if you were sure that the limit was 1 TB, because that is what it is set to by default; Microsoft would prefer it if you didn’t exercise this particular option. Individual users can’t change their limits, and administrators can only raise that limit to 5 TB. Increasing it beyond that requires extra steps. I have just gone through those steps for my own OneDrive for Business, and thought that I would share the experience.

The stratification of the different OneDrive for Business storage options has been outlined effectively by Joel Oleson in his article Three Tiers to Increase to Unlimited Storage in OneDrive for Business.

The bottom line is that you must increase the quota limit for a OneDrive in stages. These stages are:

  1. Increase quota from 1 TB to 5 TB
  2. Reach 90% of the OneDrive storage limit (4.5 TB)
  3. Open a support ticket to turn on “Boost Quota” for the tenant
  4. Increase quota from 5 TB to 25 TB
  5. Reach 90% of the OneDrive storage limit (22.5 TB)
  6. Open a support ticket to have a new site collection created with a 25 TB quota
  7. Repeat steps 5 & 6 as necessary

I have just completed step 4, so that’s as far as this post will go. It likely goes without saying, but all of the operations below require tenant admin permissions.

Increase from the default limit to 5 TB

To increase a user’s OneDrive limit from the default of 1 TB to 5 TB, it is a simple matter of running a PowerShell command, as documented in Change a specific user’s OneDrive storage space.

Set-SPOSite -Identity <user's OneDrive URL> -StorageQuota 5242880

The number 5,242,880 (5 TB) used for the storage quota must be precise.

The default value for the tenant itself can be changed as well, so that this first step isn’t necessary for new users. See Set the default storage space for OneDrive users for details. It should be noted that the maximum value that you can set as a default is 5120 GB (5 TB). If it’s set any higher, the value won’t be saved.
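Putting the pieces together, the full sequence looks something like the following. This is a minimal sketch, assuming the SharePoint Online Management Shell (Microsoft.Online.SharePoint.PowerShell) module is installed; “contoso” and the user URL are placeholders for your own tenant values.

# Connect to the tenant admin site (placeholder tenant name)
Connect-SPOService -Url "https://contoso-admin.sharepoint.com"

# Raise a single user's OneDrive quota to 5 TB (the quota value is in MB: 5 * 1024 * 1024)
Set-SPOSite -Identity "https://contoso-my.sharepoint.com/personal/user_contoso_com" -StorageQuota 5242880

# Optionally raise the tenant-wide default for newly provisioned OneDrives.
# Note that PowerShell expresses this value in MB, while the admin center uses GB.
Set-SPOTenant -OneDriveStorageQuota 5242880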

Increasing storage limit to 25 TB

Before you can set the limit any higher, you must first fill the OneDrive to 90% of its capacity, or 4.5 TB. This happened to me a few weeks ago and I started getting weekly “approaching your capacity” messages. At this point I opened a support ticket, and this is where the fun started.

I’ll spare you the back and forth email exchange, but a little snippet of the conversation went something like the following. Each line is an action or an email:

Me: fills out form explicitly stating that I have hit 90% and need quota increased from 5 TB to 25 TB
Support: Please reply with a screenshot of the problem, and any troubleshooting steps
Me: I need my OneDrive quota increased from 5 to 25 TB
Support: You need OneDrive Plan 2 for that, and your OneDrive must be at 98% (both incorrect)
Me: I have E5, which includes unlimited storage (I ignored the 98% comment)
Support: What capacity is it at now?
Me: 91%
Support: Send the OneDrive URL
Me: sends OneDrive URL
Support: Your tenant may not have the Boost storage option enabled. Let me ask my supervisor to get that enabled.
Support (one day later): Would you like instructions?
Me: Yes please
Support: sends the publicly available URL listed above that increases quota with PowerShell
Support: First change it to 10 TB, then change it to 25 TB (no idea where that came from)
Me: tries it, doesn't work for 10 or 25
Me: It didn't work
Support: Did you connect to the SharePoint Online module first? (seriously)
Me: Yes. This isn't my first rodeo. (I'm paraphrasing)
Support: Can you send a screenshot of your error message?
Me: There is no error message, the value simply does not save.
Me: This approach works up to 5 TB but not beyond
Support: Let me look further into this and get back to you.
Support: Your request has now been passed to our escalations team
Support (5 days later, different rep): We enabled boost storage. Can you try and let us know?
Me: Successful. Thank you.

I include the above partly because you might want the chuckle, but mostly to let you know not to give up on this. It’s advertised, and you paid for it.

The command that I used, once support had turned on the boost storage feature, was:

Set-SPOSite -Identity <user's OneDrive URL> -StorageQuota 26214400

The number 26,214,400 (25 TB) must be used precisely.

In any event, after running the above command, my storage limit is now at 25 TB.

The particularly interesting thing to note here is that because the “Boost storage” feature is set at the tenant level, any other OneDrives in the tenant can have their limits increased without contacting support. All that is necessary is the PowerShell script above. However, the drive must still reach 90% capacity before it can be increased.

Should I hit the next limit, I’ll report back here.


A low cost approach for paginated report subscriptions in Power BI

Paginated reports in Power BI offer a rich set of capabilities for printing, and the generation of report documents. Paginated reports can be exported to multiple document formats, and those exports can be scheduled and delivered via email. Unfortunately, for the moment at least, paginated reports require the use of a dedicated capacity, which can be cost prohibitive for some. This article describes a pattern that will help minimize the cost of using paginated reports for subscriptions.

Dedicated capacities

The term “Premium” is often used to describe the resource (as opposed to user) based licensing option in Power BI. More accurately, this option is called “dedicated capacity”, and there are a number of different ways to use dedicated capacity. Premium is certainly one of those ways, but the Azure (A) SKU and the Embedding (EM) SKU are two others.

The different SKUs have different features and capabilities, but for the purposes of this article, they are functionally equivalent. For a detailed discussion of the different dedicated capacity SKUs, see my older post Understanding the Power BI capacity based SKUs.

The “A” SKU is of particular interest because, unlike the other two SKU types, it is billed in hourly increments as opposed to monthly. If the capacity can be started, perform a task, and then be stopped, you will only be billed for the time used to perform the task in question. Matt Allington has a post that outlines this concept in great detail: Affordable Power BI Premium for Small Business.

The process

In our scenario, we have a paginated report that uses a published Power BI dataset as a data source. As of this writing, there is no API call available to render a paginated report on demand, so we will rely on the scheduled subscription capability. In order to minimize the cost of the solution, we want the dedicated capacity to run as little as possible.

The solution will consist of an Azure Logic App and Power BI paginated report scheduling. An Azure Logic App uses the same set of actions that a flow in Power Automate does, but is a little more flexible in its permissions model.

The overall process flow is as follows:

  1. The Logic App runs on a schedule, and starts the dedicated capacity by calling the Power BI API from an HTTP action
  2. The Logic App uses the “Refresh a dataset” Power BI action to initiate a dataset refresh. Note that this is only necessary if the report’s data source is a Power BI dataset.
  3. On a schedule that allows for the dataset above to be refreshed, the report schedule runs and delivers the report to the destination addresses.
  4. Another Logic App runs on a schedule that allows for both the data refresh in step 2 and the report delivery in step 3 to complete, and pauses the capacity. Ideally this should be less than an hour after the capacity was started in step 1, given the hourly billing increments.

Optionally, further automation can run on the destination inbox to deliver the report to alternate locations, such as a SharePoint library.

Create the capacity

Prior to doing any of this, you will need to create a Power BI dedicated capacity in Azure. In Azure, the service is called “Power BI Embedded” and detailed instructions for creating it can be found in this document.

Although there are 6 possible sizes to choose from, paginated reports in the Power BI service require an A4 capacity at a minimum. It is therefore important to select the “Change size” link and choose a size that is A4 or greater.
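If you prefer scripting to the portal, the capacity can also be created with the Az PowerShell module. The following is a minimal sketch, assuming the Az.PowerBIEmbedded module is installed; all of the names used are placeholders.

# Create an A4 Power BI Embedded capacity (names and region are placeholders)
New-AzPowerBIEmbeddedCapacity -ResourceGroupName "MyResourceGroup" `
    -Name "mypbicapacity" `
    -Location "canadacentral" `
    -Sku "A4" `
    -Administrator "admin@contoso.com"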

Once created, any relevant workspaces can be assigned to this capacity. It should be noted that the capacity must be created in the same tenant as the Power BI service itself.

Starting and Refreshing

Creating an Azure Logic app is relatively straightforward. The steps involved in doing so can be found in this introductory document. Once created, we will add a schedule trigger, an HTTP action, and a “Refresh a dataset” action. The complete Logic app is below.

Logic app to start the capacity and refresh a dataset

The Recurrence step will kick off the run on a scheduled basis. You can set the recurrence to a number of time periods, and the run time will be based on that start date/time. In the example above, the process runs every day at 2:50 AM.

The HTTP step calls into Azure to start the capacity. The request is a POST request to a single URL, into which you must substitute three configuration values. The URL itself is:

https://management.azure.com/subscriptions/{SubscriptionGUID}/resourceGroups/{ResourceGroup}/providers/Microsoft.PowerBIDedicated/capacities/{CapacityName}/resume?api-version=2017-10-01

Where:

  • {SubscriptionGUID} is the GUID of the Azure subscription for the capacity
  • {ResourceGroup} is the name of the Resource Group for the capacity
  • {CapacityName} is the name of the capacity
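The same call can be exercised outside of a Logic App as well. The following is a minimal PowerShell sketch, assuming the Az module is installed, that you are signed in with an account that can manage the capacity, and that the three placeholder values are replaced with your own.

# Placeholder values - substitute your own
$subscriptionId = "{SubscriptionGUID}"
$resourceGroup  = "{ResourceGroup}"
$capacityName   = "{CapacityName}"

$uri = "https://management.azure.com/subscriptions/$subscriptionId" +
       "/resourceGroups/$resourceGroup/providers/Microsoft.PowerBIDedicated" +
       "/capacities/$capacityName/resume?api-version=2017-10-01"

# Acquire a bearer token for Azure Resource Manager, then POST with an empty body
$token = (Get-AzAccessToken -ResourceUrl "https://management.azure.com/").Token
Invoke-RestMethod -Method Post -Uri $uri -Headers @{ Authorization = "Bearer $token" }

The Az.PowerBIEmbedded module also wraps these calls in the Resume-AzPowerBIEmbeddedCapacity and Suspend-AzPowerBIEmbeddedCapacity cmdlets, which can be a simpler option for interactive use.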

The other important item here is the Authentication setting. There are a number of options here, and your choice will depend on your requirements. For this example, we are using the relatively new “Managed Identity” in Azure. Managed identity allows one service (the Power BI capacity) to grant direct access to another (our Logic App). In this way, the Logic App is granted permission to start and stop the capacity.

More information on managed identities can be found in this Microsoft document.

The final action in this app is the “Refresh a dataset” action. Once added, you provide your credentials, select a workspace, and then the dataset to be refreshed. The Logic App waits for the HTTP action to finish before it tries to refresh the dataset, so you do not need to worry about adding pauses.
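If you would rather script this step than use the built-in action, the same refresh can be triggered through the Power BI REST API. A minimal sketch, assuming the MicrosoftPowerBIMgmt module is installed and using placeholder IDs:

# Sign in, then request an asynchronous refresh of the dataset (IDs are placeholders)
Connect-PowerBIServiceAccount
Invoke-PowerBIRestMethod -Method Post -Url "groups/{WorkspaceId}/datasets/{DatasetId}/refreshes"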

Subscription

The subscription for the paginated report needs to be set to run some time after the dataset above has refreshed if using a dataset as a data source, or after the capacity has started if not. There is nothing unique about subscriptions for this process, but you need to be aware of the timing. The subscription must run and complete in the window between the start of the capacity (and the refresh of any datasets) and the moment the companion Logic App fires to pause the capacity.

As of this writing, subscriptions can only be delivered to mailboxes within the same tenant as the Power BI service itself.

To create a subscription, select the “Subscribe” button in the upper right of the report toolbar.

Next, complete the subscription options. Note that in this example, the subscription is set to run at 3:15 AM, which is 25 minutes after the start capacity Logic app fires, allowing for plenty of time to start the capacity, and to refresh the dataset that this report connects to.

Pausing the capacity

A second Logic app is required to pause the capacity. It needs to be scheduled to allow for the subscription to be sent, ideally within an hour of the capacity starting so that only one hour of usage is billed.

The Logic app to pause the capacity is very similar to the one to start it, with a different URL being called.

In our example, the Recurrence trigger is set to run daily at 3:45 AM, which is 55 minutes after the start app.

The HTTP action settings are identical to those in the startup logic app, with the exception that the suspend method is called instead of the resume method. The full URL is below.

https://management.azure.com/subscriptions/{SubscriptionGUID}/resourceGroups/{ResourceGroup}/providers/Microsoft.PowerBIDedicated/capacities/{CapacityName}/suspend?api-version=2017-10-01

Where:

  • {SubscriptionGUID} is the GUID of the Azure subscription for the capacity
  • {ResourceGroup} is the name of the Resource Group for the capacity
  • {CapacityName} is the name of the capacity
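Continuing the PowerShell sketch from the startup section, pausing the capacity is the same call with the suspend verb substituted:

# Reuses the placeholder variables and token from the resume sketch above
$uri = "https://management.azure.com/subscriptions/$subscriptionId" +
       "/resourceGroups/$resourceGroup/providers/Microsoft.PowerBIDedicated" +
       "/capacities/$capacityName/suspend?api-version=2017-10-01"
Invoke-RestMethod -Method Post -Uri $uri -Headers @{ Authorization = "Bearer $token" }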

Summary

While paginated reports are currently a “Premium” offering, it is possible to use automation techniques, along with the Power BI Embedded Azure service, to run the service for only an hour a day, turning a cost of hundreds of dollars per day into several dollars per day.

By taking advantage of the report rendering capabilities that paginated reports afford, and by building the reports on top of pre-existing Power BI datasets, paginated reports can become a print engine for analytical reports.


Keep your Power BI SharePoint reports current with Microsoft Flow

Power BI is without question the best way to report on data in SharePoint lists. The query tools available in Power Query make working with SharePoint data relatively painless, and the cached dataset means that reports are run against an optimized copy of the list data, not the list itself.

This latter distinction, while removing the performance issues of systems that query lists directly, also introduces problems with data latency. The report will never be fully “up to date”, as it needs to be refreshed on a periodic basis.

Consider the following scenario. A Power BI report has been built that uses data from a SharePoint list. That report has been embedded on a SharePoint page in the same site. A user adds an item to the list, and then navigates to the page to see the updated report. Unfortunately, that report won’t get updated until the next scheduled refresh.

This has been a significant problem, until the recent release of the new “Refresh a dataset” action in Microsoft Flow.

It is a relatively simple procedure to add a one-step flow to any SharePoint list, triggered when an item is created, updated, or deleted. The flow needs only the “Refresh a dataset” action, configured for the relevant dataset, and the embedded reports will be updated very shortly after the data is modified.

Alternatively, the flow can be triggered by a timer, allowing you to create your own schedule (every 5 minutes, etc) that is not hardwired to run at the top or bottom of any given hour.

A few caveats should be kept in mind when using this action, however.

While this action gives us much finer grained control over when refreshes happen, all of the current license restrictions remain in place. For datasets located in the shared capacity, only 8 refreshes per day are allowed.

For datasets in dedicated capacities (Premium), there are no limits to the number of refreshes. The limit of 48 per day is a UI restriction, not a licensing restriction. However, refresh can utilize significant resources, particularly memory, so you’ll want to ensure that you have sufficient resources to support the update frequency.

Finally, the load on the source data system should be considered. Refresh will pull a significant amount of data every time it is run.

Caveats aside, this new flow action is a welcome relief to those that need greater control of how their reports are updated.
