
Month: May 2015

Using Excel With External Data – What’s the Right Tool?

Excel has been used with external data for… well, as long as I've been using Excel. So why bother writing a blog post about a capability that is so mature? In recent years, Excel has adopted a number of new, and frankly better, mechanisms for working with external data, while retaining the old ones. Given that there are now multiple tools in Excel for working with external data, it's not always clear which one is best, and unfortunately no single tool wins in all cases, although I believe that one soon will.

The answer, as always, is "it depends". When it depends, the important thing is to understand the strengths and weaknesses of each approach. With that said, let's have a look at all of the options.

ODC Connections

ODC (Office Data Connections) are the traditional method of accessing data in Excel. You can create or reuse an ODC connection from the Data tab in the Excel ribbon.

When using an ODC connection, you establish a connection with a data source, form some sort of query and import the resultant data directly into the Excel workbook. From there, the data can be manipulated and shaped in order to support whatever the end user is trying to do. The one exception to this behaviour is the connection to SQL Server Analysis Services (SSAS). When a connection is made to SSAS, only the connection is created. No data is returned until an analysis is performed (through a pivot table, chart, etc.), and then only the query results are retrieved.

When the workbook using an ODC connection is saved, the data is saved within it. In the case of an SSAS connected workbook, the results of the last analysis are saved along with it. For small amounts of data, this is just fine, but any large analysis is bound to quickly run into the data limits in Excel, which are 1,048,576 rows by 16,384 columns in Excel 2013. In addition, such a file is very large and extremely cumbersome to work with. Even so, Excel has been the primary tool of choice for business analysts for years.

Data loaded into the workbook can be refreshed on demand, but it can also be altered, shaped, mashed up, and, as is too often the case, left to grow stale. Workbooks such as these have become known as "spreadmarts" and are the scourge of IT and business alike. With these spreadmarts, multiple versions of the same data proliferate, and it becomes harder to discern which data is the most accurate and current, not to mention the governance implications.

SharePoint has provided a way to mitigate some of the concerns with these connections. SharePoint itself supports ODC connections, so users can access these workbooks stored within SharePoint and refresh their data from the source, either on demand or on open. A single point of storage, along with a measure of oversight and browser access, helps to restore a modicum of sanity to an out-of-control spreadmart environment, but the core issues remain.

In order to help with the core issues, Microsoft introduced PowerPivot in 2009.

PowerPivot Connections

Created in PowerPivot

PowerPivot was originally (and still is) an add-in to Excel 2010, and is a built-in add-in to Excel 2013. PowerPivot allows for the analysis of massive amounts of data within Excel, limited only by the memory available to the user's machine (assuming a 64 bit version). It does this by highly compressing data in memory using columnar compression. The end result is that literally hundreds of millions of rows of data can be analyzed efficiently from within Excel.

You can see that compression at work by comparing the same data imported directly into an Excel workbook, and into a PowerPivot model within a workbook. The two files used for comparison contain election data, with the maximum number of rows that Excel can handle directly (1,048,576) and 25 columns.

Getting data into the model was originally (and still can be) a completely separate process from bringing it into Excel. PowerPivot has its own data import mechanism, accessed from the PowerPivot window itself. First, click on the PowerPivot tab in Excel and then click Manage. If you don't have a PowerPivot tab, you will need to enable the add-in. If you don't have the add-in, you have an earlier version of Excel, and you'll need to download it.

Once the PowerPivot window opens, the “Get External Data” option is on the ribbon.

Once the appropriate data source is selected and configured, data will be loaded directly into the data model – there is no option to import that data into a worksheet. Once the data is in, pivot tables and pivot charts can be added to the workbook that connect to the data model, much like when creating an ODC connection to Analysis Services. In fact, it's pretty much exactly like connecting to Analysis Services, except that the AS process is running on the workstation.

Created in Excel

PowerPivot, and more importantly the tabular data model, was included in Excel 2013. With that addition, Microsoft added a few features to make the process of getting data into the data model a little easier for users who are less tech savvy and may be uncomfortable working with a separate PowerPivot window. That's actually part of the thinking behind leaving the PowerPivot add-in turned off by default.

When a user creates an ODC connection as outlined above, there are a couple of new options in Excel 2013. First, the “Select Table” dialog has a new checkbox – “Enable selection of multiple tables”.

When this option is selected, more than one table from the data source can be selected simultaneously, but more importantly, the data will automatically be sent to the data model in addition to any other import destinations.

Even if the multiple selection option wasn’t chosen, the next dialog in the import process, “Import Data” also has a new check box – “Add this data to the Data Model”.

Its purpose is pretty self-explanatory. It should be noted that if you choose this option, and also choose “Only Create Connection”, the data will ONLY be added to the model, nowhere else in the workbook. This is functionally equivalent to doing the import from the PowerPivot window, without enabling the add-in.

Power Query Connections

When Power BI was originally announced, Power Query was also announced and included as a component. This was very much a marketing distinction, as Power Query exists in its own right and does not require a Power BI license to use. It is available as an add-in to both Excel 2010 and 2013, and will be included with Excel 2016.

Power Query brings some Extract, Transform and Load (ETL) muscle to the Excel data acquisition story. Data can be not only imported and filtered, but also transformed with Power Query and its powerful M language. Power Query brings many features to the table, but this article is focused on its use as a data acquisition tool.
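To give a sense of what the Power Query editor generates behind the scenes, below is a minimal sketch of an M query that imports and shapes data. The server, database, table, and column names are all hypothetical; the point is the shape of the query that the editor builds up step by step.

    let
        // Connect to a SQL Server database (server and database names are placeholders)
        Source = Sql.Database("SQLSERVER01", "Elections"),
        // Navigate to a hypothetical dbo.Votes table
        Votes = Source{[Schema = "dbo", Item = "Votes"]}[Data],
        // Filter: keep only the rows for a single election year
        Recent = Table.SelectRows(Votes, each [ElectionYear] = 2014),
        // Transform: set explicit column types so that clean data reaches the destination
        Typed = Table.TransformColumnTypes(Recent, {{"ElectionYear", Int64.Type}, {"VoteCount", Int64.Type}})
    in
        Typed

Each step added in the editor becomes a line like those above, so the same query can be built entirely through the UI and then refined by hand in M.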

To use Power Query, it must first be downloaded and installed. Once installed, it is available from the Power Query tab (Excel 2010 and 2013).

Or from the Data tab, New Query (Excel 2016).

Once the desired data source is selected, the query can be edited, or loaded into either the workbook, the data model, or both simultaneously. To load without editing the query, the load option at the bottom of the import dialog is selected.

Selecting "Load To" will allow you to select the destination for the data – the workbook, the model, or both. Selecting Load will import the data to the default destination, which out of the box is the workbook. Given that the workbook is an inefficient destination for data, I always recommend that you change the default settings for Power Query.

To do so, select Options from the Power Query tab (2010 and 2013) or the New Query button (2016), click the Data Load section, and then specify your default settings.

Data Refresh Options

In almost every case when external data is analyzed, it will need to be refreshed on a periodic basis. Within the Excel client, this is simple enough – click on the Data tab, and then the Refresh All button, or refresh a specific connection. This works no matter what method was used to import the data in the first place. Excel data connections can also be configured to refresh automatically every time the workbook is opened, or on a periodic basis in the background.

However, workbooks can also be used in a browser through Office Web Apps and Excel Services (SharePoint and Office 365), or as a data source for Power BI dashboards. In these cases the workbooks need to be refreshed automatically so that consuming users see the most up-to-date data when the workbooks are opened. The tricky part is that not all of the connection types listed above are supported by all of the servers or services. Let's dive into what works with what.

SharePoint with Excel Services

Excel Services first shipped with SharePoint 2007, is a part of SharePoint 2010 and 2013, and will be included with SharePoint 2016. From the beginning, Excel Services allowed browser users to view and interact with Excel workbooks, including workbooks that were connected to back end data. The connection type supported by Excel Services is ODC, and ODC only.

Excel Services has no mechanism of its own for maintaining data refresh. However, the data connection refresh options are supported, which means that the workbook can be automatically refreshed when opened, or on a scheduled basis (every xxx minutes, in the background). Unfortunately, this can come with a significant performance penalty, and the refreshed data exists only in memory – the workbook in the library is not updated. The data in the workbook can only be changed by editing the workbook in the client, refreshing it, and re-saving it.

Workbooks with embedded data models (PowerPivot) can be opened in the browser, but any attempt to interact with the model (selecting a filter, slicer, etc) will result in an error unless PowerPivot for SharePoint has been configured.

SharePoint with Excel Services and PowerPivot for SharePoint

PowerPivot for SharePoint is a combination of a SharePoint service application and Analysis Services in SharePoint mode. When installed, it allows workbooks that have embedded PowerPivot data models to be interacted with through a browser. The way that it works is that when such a workbook is first interacted with, the embedded model is automatically "promoted" to the Analysis Services instance, and a connection is made to it, thus allowing the consuming user to work with it in the same manner as with an SSAS connected workbook.

The PowerPivot for SharePoint service application runs on a SharePoint server and allows for individual workbooks to be automatically refreshed on a scheduled basis. The schedule can be no more granular than once per day, but the actual data within the model on disk is updated, along with any Excel visualizations connected to it.

When the refresh process runs, it is the functional equivalent of editing the file in the client, selecting refresh all, and saving it back to the library. However, there is one significant difference. The Excel client will refresh all connection types, but the PowerPivot for SharePoint process does not understand Power Query connections. It can only handle those created through the Excel or PowerPivot interfaces.

PowerPivot for SharePoint ships on SQL Server media, and this limitation is still true as of SQL Server 2014. At the Ignite 2015 conference in Chicago, one of the promised enhancements was Power Query support in the SharePoint 2016 timeframe.

Office 365

Office 365, or more precisely SharePoint Online, supports Excel workbooks with ODC connections and PowerPivot embedded models in a browser. These workbooks can even be refreshed if the data source is online (SQL Azure), but they cannot be refreshed automatically. In addition, only ODC and PowerPivot connections are supported for manual refresh; Power Query connections require Power BI for Office 365. Office 365 also imposes a 30 MB model size limit – beyond that, the Excel client must be used. In short, the Office 365 data refresh options are very limited.

Power BI for Office 365

Power BI for Office 365 brings the ability to automatically refresh workbooks with embedded data models. Data sources can be on premises or in the cloud. On-premises refresh is achieved through the use of the Data Management Gateway. It also raises Office 365's model size limit from 30 MB to 250 MB. With Power BI for Office 365, both manual and automatic refreshes can be performed for both PowerPivot and Power Query connections; however, PowerPivot connections are currently restricted to SQL Server and Oracle sources only.

The automatic refresh of ODC connections is not supported. A workbook must contain a data model in order to be enabled for Power BI.

Power BI Dashboards

Power BI Dashboards is a new service that allows users to design dashboards without necessarily having Office 365, or even Excel. It is currently in preview form, so anything said here is subject to change. It is fundamentally based on the data model; it currently works with Excel files as a data source, and Excel is promised as a report source as well. The service has the ability to automatically refresh the underlying Excel files on a periodic basis, more frequently than daily.

In order for a workbook to be refreshed by Power BI, it must (at present) be stored in a OneDrive or OneDrive for Business container. It must also utilize either a PowerPivot or a Power Query connection. At present, the data source must also be cloud based (i.e. SQL Azure), but on-premises connectivity has been promised.

SQL Server Analysis Services

Another consideration, while not a platform for workbooks, is SQL Server Analysis Services (SSAS). Excel can be used to design and build a data model, and that data model can at any time be imported into SSAS. As of version 2014, SSAS fully supports all connection types for import – ODC, PowerPivot and Power Query. Once a data model has been imported into SSAS, it can be refreshed on a schedule as often as desired, you can connect to it with Excel, and you can share it in SharePoint. You can also connect to it in Power BI Dashboards through the SSAS connector. From both a flexibility and power standpoint, this is the best option, but it does require additional resources and adds complexity.

Refresh Compatibility Summary

For convenience, the table below summarizes the refresh options for the different connection types.

 

|                                     | ODC | PowerPivot | Power Query |
|-------------------------------------|-----|------------|-------------|
| Excel Client                        | M   | M          | M           |
| SharePoint/Excel Services           | M   |            |             |
| SharePoint/Excel Services/PP4SP     | M   | A          |             |
| SQL Server Analysis Services Import | A   | A          | A           |
| Office 365                          | M   | M          |             |
| Office 365 with Power BI            |     | A*         | A           |
| Power BI Dashboards                 |     | A          | A           |

M – Manual refresh

A – Both Manual and Automatic Refresh

* only limited data sources

 

The Right Tool

I started out above by saying that the selection of import tool would depend on circumstances, and that is certainly true. However, based on the capabilities and the restrictions of each, I believe that a few rules of thumb can be derived. As always, these will change over time as technology evolves.

  1. Always use the internal Data Model (PowerPivot) when importing data for analysis.

    The workbook itself is an inefficient destination for imported data; the data model compresses the data and is not bound by Excel's row and column limits.

  2. Power Query is the future – use it wherever possible

    All of Microsoft's energies around ETL and data import are going into Power Query. Power Query is core to Power BI, and announcements at the Ignite Conference indicate that Power Query is being added to both SQL Server Integration Services and SQL Server Reporting Services. Keep in mind that we have been discussing only the data retrieval side of Power Query – it has a full set of ETL capabilities as well, which should also be considered (see the sketch following this list).

  3. PowerPivot or ODC Connections must be used on premises

    PowerPivot for SharePoint does not support Power Query for refresh. This means that you MUST use PowerPivot connections for workbooks with embedded models. If you are already using SSAS, use an ODC connection within Excel.

  4. Power Query or PowerPivot must be used for cloud BI.

    PowerPivot connections will work for a few limited cases, but more Power Query support is being added constantly. Where possible, invest in Power Query.

  5. If on-premises, consider importing your models into SSAS

    SSAS already supports Power Query. If, instead of using PowerPivot for SharePoint, Analysts build their models using Excel and Power Query, they can be “promoted” into SSAS. All that is then required is to connect a new workbook to the SSAS server with an ODC connection for end users. The Power Query workbooks can be used in the cloud, and the SSAS connector in Power BI Dashboard can directly use the SSAS models created.

  6. Choose wisely. Changing the connection type often requires rebuilding the data model, which in many cases is no small feat.
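As noted under rule 2 above, Power Query's M language can do far more than retrieval. Below is a brief, hypothetical sketch of a mashup – joining and aggregating two tables already in the workbook – to illustrate the kind of ETL work M handles; the table and column names are invented for the example.

    let
        // Two hypothetical tables already loaded in the workbook
        Customers = Excel.CurrentWorkbook(){[Name = "Customers"]}[Content],
        Sales = Excel.CurrentWorkbook(){[Name = "Sales"]}[Content],
        // Mash up the two sources: join sales to customers on CustomerID
        Merged = Table.NestedJoin(Customers, {"CustomerID"}, Sales, {"CustomerID"}, "SalesRows", JoinKind.LeftOuter),
        // Expand only the Amount column from the joined rows
        Expanded = Table.ExpandTableColumn(Merged, "SalesRows", {"Amount"}),
        // Aggregate to one row per customer with a total sales figure
        Totals = Table.Group(Expanded, {"CustomerID"}, {{"TotalSales", each List.Sum([Amount]), type number}})
    in
        Totals

The result can be landed in the data model like any other Power Query output, which keeps the shaping logic refreshable rather than baked into a worksheet.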

In summary, when importing data into Excel, the preferred destination is the tabular model, and to import data into that model, Power Query is the preferred choice. The only exception is on-premises deployments. In these environments, consideration should be given to connecting to an SSAS server, and failing that, PowerPivot imports are the best option.


Power BI Licensing Demystified

Well, that’s a pretty ambitious title.

Power BI is currently an add-on to Office 365 and requires SharePoint Online in order to work. In January 2015, a new version of the Power BI service was announced that removes the dependency on SharePoint Online, but will continue to leverage it if it is available. At the same time, a complete overhaul of the licensing model was announced. The licensing changes were widely welcomed, but they do raise a number of questions as to what license will be required when. The new version of the dashboard will be launched in the second half of 2015, so for now, we are still dealing with the original licensing model. Update – Power BI for Office 365 pricing has been updated to reflect the new model pricing and is available now (May 2015).

Power BI v1 Licensing

Power BI v1 is an add-on to SharePoint Online in Office 365 that, among other things, adds the ability for Excel Online to work with data models larger than 30 MB (originally this limit was 10 MB) up to 250 MB, and to automatically update data models stored in Excel from cloud based data sources, or from on-premises data through the Data Management Gateway. In order to take advantage of this capability, the end user needs a Power BI license, which carried a cost of approximately $20/user/month. I have seen it reported in many places that the cost of Power BI was $40/user/month. Indeed, there was a Power BI SKU that cost approximately this much (it's referred to as Standalone), but this SKU also included a license for SharePoint Online, so really, the licenses were one and the same. You either have a license for Power BI or you don't. Details of Power BI for Office 365 licensing are here, and they have already been updated to reflect the new pricing.

If you found yourself in an organization that had some users with licenses and some without, you may have discovered some interesting behaviour. The Power BI service always leverages workbooks stored in SharePoint document libraries. These workbooks are available to all Office 365 users, whether or not they have a license. Users without a license can't use the mobile client, Power Q&A, the gallery view, or schedule data refreshes, but they certainly can open the workbook and interact with it. Well, they can until they hit the data size limit. Beyond 30 MB, unlicensed users will receive a message indicating that their license is insufficient to view that file in a browser. However, they can always download a copy and work with it that way.

This is an important distinction to note, because in this scenario, a licensed user can Power BI enable a workbook and schedule a daily data refresh. Once that data is refreshed, the unlicensed user gains the benefit from the refresh, and can interact with the workbook in a browser if it is smaller than 30 MB, or in the Excel client if it is larger.

Power BI v2 Licensing

When Power BI v2 (or Power BI Dashboards) was announced in January 2015, a new freemium pricing model was introduced. Power BI was made available in a standalone fashion (no longer shackled to SharePoint Online), either for free or for $9.99/user/month for the Pro edition. The details and differences for the two editions can be found here. In addition, because Power BI will also continue to work with SharePoint Online, there will also be a SKU for the "Standalone" version at $17.99/user/month. I find the term "standalone" highly confusing here, because this is in fact a license that contains a SharePoint Online licence – pretty much the opposite of standalone, but I digress. The comparisons leave a number of unanswered questions, which I hope to answer here.

One of the new concepts introduced with this model is the data capacity limit, and it bears explanation. It is a per-user limit, and it is cumulative. Free users are allocated 1 GB and Pro users are allocated 10 GB. Previously, the only limit was per model (file by file) at 250 MB; there was no total capacity limit per user. This is a significant difference.

Another thing worth pointing out here is that the 250 MB model size limit still exists. As with the Office 365 service, no single model can be larger than 250 MB.

What do you do if your model is larger than 250 MB? This new version of Power BI will allow connections to on-premises data. At the moment, on-premises connections are restricted to SSAS tabular models only through the SSAS connector, but more are coming. On-premises data connections don’t count toward any of the capacity limits. However, on-prem connections will require a Pro edition license.

The per-user capacity limits are cumulative, which is simple enough to understand for one given user. A user with a free license could have 4 x 250 MB models and reach their limit. A Pro user would need to have 40 of the same model to reach their limit. However, what happens when a user shares a dashboard with another user? Since it is really just the connection that is being shared, and the models are still being created per-user, the consuming user will need to utilize their own storage, and therefore it will be counted against their limit. If I share my 250 MB model with you, it will count against your total capacity limit.

What happens when a Pro user shares content with a free user? There are two possible outcomes – it will either work, or not. If the data model utilizes any of the Pro-only features, the free user will not be able to consume it. For example, if the workbook has been scheduled to receive updates more than once per day, or receives data from on-premises sources (SSAS connector, Data Management Gateway), the consuming user will not be able to access it.

There are still a few unanswered questions with this new model, and as they are addressed, I’ll try to keep this post up to date.


Ignite 2015 Impressions

I don't normally do conference summaries, but Ignite was just so big, and there was so much information, that I felt the need to record my thoughts around it, and decided to share. Ignite was very much cross product, which is in line with where Microsoft seems to be headed – a focus on the function, not the tooling. With around 24,000 people in attendance, the conference, and the logistical issues that its size imposed, was too big for my taste, but the amount of information was excellent, and I imagine that I'll be digesting it for some time to come. For now, here's how I interpreted it all.

Azure and Office 365

Cloud services are killing it.

Between Azure's Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) and Office 365's Software as a Service (SaaS), Azure Active Directory is already sporting over 450 million active users. Azure Active Directory is what is used by Office 365, and the accounts within it are otherwise known as Organizational Accounts. It's an important metric because I believe that the Microsoft strategy is to own identity online. It makes sense when you look at what they seem to be doing.

For years, they absolutely dominated operating systems. Nothing to this day has ever really touched them on the desktop, but Apple changed the base with mobile, and developers flocked there. Google tried to do the same thing to Apple, and has been quite successful, but not fully so. While Android is in the majority in the mobile space, iOS is still quite strong, and shows no signs of diminishing. Windows isn't really a factor in mobile, but still dominates the desktop, which remains significant (about 300 million units/year), and is a factor on tablets. Microsoft got flanked by Apple and Android, and is holding the fort, but not conquering any new territory.

Microsoft now seems to be focusing on cloud services, and they don’t care what platform is being used to consume them. I think that at the core of this strategy is cloud identity – whether it is consumer (Microsoft Account) or enterprise (Azure Active Directory). With this identity strategy, Microsoft is attempting to again change the base – to outflank both Apple and Google and make the operating system almost irrelevant. Every app they’re putting out now is usually for iOS first, then Android, then Windows Phone. The new Universal app platform likely means that they will come out for Windows (desktop, phone, whatever) at initial launch with iOS, but the bottom line is that an awful lot of effort is going into supporting all platforms all the time. If the apps work well across platforms, then the choice of operating system simply becomes one of personal preference, not of features. It gets marginalized, and Microsoft owns the back end service. That’s why I think that so much effort has gone into this strategy.

Another thing that I sensed at the show was that in the past, all of the talk around identity and federation (ADFS) was about bringing your on-premises identities into the cloud to support a few new services. Now, there seems to have been a real shift, and the reason for adopting ADFS is to bring the Azure Active Directory identities back down on-premises to where legacy applications can use them. It's a subtle shift, but a discernible one.

One of the more interesting products introduced into Azure recently is Logic Apps. As far as I can tell, Logic Apps are the cloud manifestation of BizTalk, which is an excellent product with a steep learning curve. Logic Apps remove the learning curve and allow you to quickly connect and flow data through multiple systems. The session on Logic Apps can be seen here:

SharePoint 2016

In the past, SharePoint announcements would warrant their own post, but now SharePoint is probably best seen as part of a greater whole. Details on SharePoint 2016 were first announced at Ignite, and I feel that the most informative session was Bill Baer's on Wednesday morning, where he outlined the major architectural changes:

Not surprisingly, this release will be very much about hybrid SharePoint/Office 365 scenarios. Some of the notable items from the talk are:

  • SharePoint Server 2016 will require 64-bit Windows Server 2012 R2 or Windows Server 10, and SQL Server 2014 SP1 as a minimum.
  • Standalone installations are no longer supported. It will be possible to install SharePoint and SQL Server on the same machine, but full SQL Server will be required, and SQL Express will no longer be supported. This obviously raises questions about whether or not there will be a free SharePoint Foundation SKU with the next release.
  • PerformancePoint will in fact be included with SharePoint 2016. I doubt very much that there will be any investments in it at all, but it will at least be there. I’d view this as legacy support.
  • SharePoint 2016 will support SAML claims as a first class citizen. That means that it will be possible to log in with Azure Active Directory credentials, and this is an example of bringing cloud identities on-prem. However, don't trash that domain controller just yet – I'm sure that service accounts will still need to be NTLM, since SQL Server needs it.
  • There will be a new Roles Based installation. It will be much simpler to install and maintain servers with specific roles such as web front end, search, etc. BI will be one of the roles.
  • There will be new boundaries: content databases up to terabyte sizes, a 10 GB file size limit, and list thresholds much greater than 5,000 items (although how much greater was not specified).
  • No more FIM. The user profile engine that we've all grown to… deal with from SharePoint 2010 and 2013 is no longer embedded. The full Forefront Identity Manager can be used, but the default profile import mechanism will be the good ol' user import from SharePoint 2007.
  • Durable resource-based links. Every object in SharePoint will receive its own resource-based URL. That means that it can be moved around in the farm, and reference URLs will still work. This is like permalinks in WordPress.
  • While not final, a preview was shown of some operational reporting. This is primarily “speeds and feeds” type information that would interest a farm administrator, although simple usage reporting could be seen.
  • Integration with the Office Graph – see below section on Delve.

SQL Server 2016

The next release of SQL Server was announced at Ignite. It's chock full of new things, focused primarily on hybrid operation and analytics. One of the more interesting concepts in this version is the ability to "stretch" a database into the cloud. With this, you can take an on-premises database and extend it into Azure SQL, specifying rules to determine which data goes where. Given that online storage is significantly cheaper than on-premises storage, this makes total sense, and they've figured out a way to make it work reliably. The overall SQL Server keynote can be found here:

I’m very interested in the analytics capabilities, and the session outlining the improvements to SQL Server BI is found here:

I found the following items particularly notable:

  • A comment was made during the BI session that Microsoft is “Super Committed” to SQL Server Reporting Services (SSRS). Hopefully this helps quell the naysayers. SSRS is receiving a major facelift in this version, bringing a modern design experience. In addition, the parameters pane has received a great deal of attention, adding, among other things, support for cascaded dropdowns.
  • Datazen is a visualization company based in Toronto that was recently acquired by Microsoft. There is a good demo of Datazen in the session, and I highly recommend watching it. It will be included with SQL Server 2016.
  • Datazen has KPIs. It also has "sub-KPIs". I'm not sure about you, but that sounds a lot like a scorecard to me. This may sound the eventual death knell for PerformancePoint (see the SharePoint section above), given that scorecards are about all that it uniquely provides to the BI stack.
  • Tabular models in SSAS (and presumably PowerPivot) will support many-to-many relationships and a host of other new features.
  • Tabular models in SSAS and PowerPivot will have time intelligence built in. No longer will separate time intelligence tables be required. It's an open question, however, how extensible they will be, and when.
  • SharePoint will allow browser editing on PowerPivot embedded workbooks. Currently, you need to launch Excel to edit a PowerPivot embedded workbook.

Office 365 Groups

I attended the roadmap on Office 365 Groups:

(video unavailable as of posting – should be shortly)

During this session, the light really went on for me. Groups was (were? Not sure about the grammar on this… it's a name) introduced last year and appeared to be a glorified distribution list with SharePoint artifacts. However, it's about to become the center of the Office 365 collaborative experience. It ties together Azure Active Directory objects, a SharePoint site collection, OneNote, Skype, and OneDrive into a single cohesive, non-customizable experience. It currently uses Exchange exclusively for social conversations, but full Yammer integration is promised. No date was given for the integration, but my guess is that the target is early 2016.

The current user interface is limited – too limited for my own use at the moment – but during the demonstration, a rather useful interface was shown that is coming soon. You can access Groups presently through the Outlook web client in Office 365. I'm running the Office 2016 preview on my laptop, and there is a very nice interface contained there. There was chatter, particularly in the Yammer community, about confusion as to what tool should be used when, but I think that the coming deep integration of Yammer into Groups will render this point moot.

The next UI, demonstrated in the above session, looks really good and offers a lot of benefits. There is also a mobile app coming very shortly for, you guessed it, iOS and Windows Universal, then Android.

One unanswered question from the show is whether Groups would be available on-premises.

Power BI

Power BI content was sort of sprinkled throughout the conference, without specific focus. There was a session on the new DAX features available in Power BI Designer that is worth a watch from a modeling perspective:

One talk that really impressed me was by Lukasz Pawlowski and Josh Caplan entitled Power BI for Developers:

They cover content packs, real-time analytics, and an in-depth analysis of the "how old" app that went viral during Build.

It was also announced in the SQL BI session that SSRS will in fact be included in Power BI shortly, although little detail was provided. Finally, for development, the best place to get started is http://dev.powerbi.com.

It should also be noted that Power BI was at the center of almost any analytics discussion during the conference. This is by no means a little side project.

Delve/Office Graph

Delve is a newer product in Office 365 that provides insights around what content is relevant in an organization, and how people interact with it. It's available directly from the app launcher in Office 365, and recently, user profiles have moved to the Delve application. It's powered by the Office Graph, which is in essence an advanced index that contains content from Exchange and SharePoint, and will very shortly be extensible for multiple content types. The roadmap session for Delve and the Graph is available here:

During the session, it was stated that "Delve is the evolution of Enterprise Search". Given that all of the work on Delve and the Graph is coming from Oslo and the former team from FAST search, this just makes sense. One of the major announcements was that SharePoint 2016 content can be crawled by the Office Graph to provide both search results and Delve results in Office 365. The reverse will also be true, in that the on-premises crawler will be able to index Office 365 content for search results, but Delve and the Graph will remain in Office 365. The surprise here was that later this year, it will be possible to do the same thing with SharePoint 2013 through a coming enhancement.

Much of this Graph goodness can also now be accessed through the new Office 365 Universal API:

tyGraph

tyGraph is our product that provides advanced analytics for Yammer. It had something of a coming out party at Ignite, and while we didn’t have a booth or any launch sessions, we were fortunate enough to have several folks, customers and thought leaders present talks that at least in part featured tyGraph. If you’re interested in analytics for your Yammer network, I recommend that you watch some or all of these sessions:

Enterprise Social, from “Ooh, Shiny” to Business Success – Melanie Hohertz, Cargill

The Microsoft Enterprise Social Journey: How We Did It – Chris Slemp, Microsoft

Gain Organizational Insights with Yammer Data Mining and Analytics – Steve Nguyen, Microsoft and Tammy Young Heck, EY

Yammer Mining: Dig in and “Listen” to What Your Big *Social* Data Is Saying – Richard diZerega, Microsoft


PerformancePoint Lives!

During the Microsoft Ignite conference in Chicago, Bill Baer delivered the first public details on the inner workings of SharePoint 2016. He was discussing how non-Office 365 services were implemented in SharePoint 2016, and made the following statement:

"For capabilities such as PerformancePoint Services, we have brought those services forward into SharePoint 2016 by back porting them into the product itself."

This is quite significant, as it represents the first time in quite a while that anyone at Microsoft has made any statement about PerformancePoint. There has been a fair bit of speculation as to whether the product would even be included in SharePoint 2016, for example here, here, and here. This statement should clear up any confusion for those heavily invested in PerformancePoint as to whether they will be supported in the new release. You can see it for yourself below, and if you can, I recommend watching the entire session – Bill did a great job. The PerformancePoint statement occurs at about 9:50.

However, just because it will be there doesn't mean that it has a bright, shiny future. I suspect that it is being included for compatibility reasons only, so that those with PerformancePoint investments can move forward to SharePoint 2016. In terms of new features, I believe that PerformancePoint, much like InfoPath, is a dead end. Time will of course tell.
