
Monday, December 31, 2018

Adding Date Period to SSAS Tabular Models


Time intelligence is one of the great features of SSAS. I often find it useful to use date periods with dashboard objects instead of simple date ranges. For example, if you want to filter the data to the last 7 days, you often have to specify both the current date and the date seven days ago. The same is true for the current month. To simplify this process, I suggest creating a Date Period table for common date selections, such as the following:


  • Current Month
  • Last Two Months
  • Last Three Months
  • Last Six Months
  • All Periods
In this example, the date table is 'Fiscal Week' (typically this would be your Date table) and the date column is FiscalDate. The 'Fiscal Week'[FiscalDate] references would need to be changed to whatever is correct in your SSAS model (e.g., 'Date'[Date]).

First, create a Calculated Table in your Tabular model with the following DAX query:

= UNION (
    ADDCOLUMNS ( SUMMARIZE ( CALCULATETABLE ( 'Fiscal Week', DATESBETWEEN ( 'Fiscal Week'[FiscalDate], EOMONTH ( TODAY(), -1 ) + 1, TODAY() ) ), 'Fiscal Week'[FiscalDate] ), "Period", "Current Month" ),
    ADDCOLUMNS ( SUMMARIZE ( CALCULATETABLE ( 'Fiscal Week', DATESBETWEEN ( 'Fiscal Week'[FiscalDate], EOMONTH ( TODAY(), -2 ) + 1, TODAY() ) ), 'Fiscal Week'[FiscalDate] ), "Period", "Last Two Months" ),
    ADDCOLUMNS ( SUMMARIZE ( CALCULATETABLE ( 'Fiscal Week', DATESBETWEEN ( 'Fiscal Week'[FiscalDate], EOMONTH ( TODAY(), -3 ) + 1, TODAY() ) ), 'Fiscal Week'[FiscalDate] ), "Period", "Last Three Months" ),
    ADDCOLUMNS ( SUMMARIZE ( CALCULATETABLE ( 'Fiscal Week', DATESBETWEEN ( 'Fiscal Week'[FiscalDate], EOMONTH ( TODAY(), -6 ) + 1, TODAY() ) ), 'Fiscal Week'[FiscalDate] ), "Period", "Last Six Months" ),
    ADDCOLUMNS ( SUMMARIZE ( CALCULATETABLE ( 'Fiscal Week' ), 'Fiscal Week'[FiscalDate] ), "Period", "All Periods" )
)


Alternatively, we could define the date periods by days…

  • Last 7 Days
  • Last 14 Days
  • Last 30 Days
  • Last 90 Days
  • All Periods
= UNION (
    ADDCOLUMNS ( SUMMARIZE ( CALCULATETABLE ( 'Fiscal Week', DATESBETWEEN ( 'Fiscal Week'[FiscalDate], TODAY() - 7 + 1, TODAY() ) ), 'Fiscal Week'[FiscalDate] ), "Period", "Last 7 Days" ),
    ADDCOLUMNS ( SUMMARIZE ( CALCULATETABLE ( 'Fiscal Week', DATESBETWEEN ( 'Fiscal Week'[FiscalDate], TODAY() - 14 + 1, TODAY() ) ), 'Fiscal Week'[FiscalDate] ), "Period", "Last 14 Days" ),
    ADDCOLUMNS ( SUMMARIZE ( CALCULATETABLE ( 'Fiscal Week', DATESBETWEEN ( 'Fiscal Week'[FiscalDate], TODAY() - 30 + 1, TODAY() ) ), 'Fiscal Week'[FiscalDate] ), "Period", "Last 30 Days" ),
    ADDCOLUMNS ( SUMMARIZE ( CALCULATETABLE ( 'Fiscal Week', DATESBETWEEN ( 'Fiscal Week'[FiscalDate], TODAY() - 90 + 1, TODAY() ) ), 'Fiscal Week'[FiscalDate] ), "Period", "Last 90 Days" ),
    ADDCOLUMNS ( SUMMARIZE ( CALCULATETABLE ( 'Fiscal Week' ), 'Fiscal Week'[FiscalDate] ), "Period", "All Periods" )
)

Once you’ve created the Calculated Table, you must add a relationship to the Date table in your model. In this example, we create the relationship between the Date Period calculated table and a table called Fiscal Week. We simply use the default Cardinality of One to Many, and for the Filter Direction, we use Both Tables.
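For reference, the resulting relationship would look roughly like this in the model.bim file. This is just a sketch for a 1200-or-later compatibility level model, using this example's table and column names:

"relationships": [
  {
    "name": "DatePeriod to FiscalWeek",
    "fromTable": "Date Period",
    "fromColumn": "FiscalDate",
    "toTable": "Fiscal Week",
    "toColumn": "FiscalDate",
    "crossFilteringBehavior": "bothDirections"
  }
]

Note that the Date Period table is the "many" side of the relationship: each date appears once for every period it belongs to.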



Select OK to save. After deploying the model, you should be able to filter by periods.

Friday, January 29, 2016

YTD and Previous YTD calculations

Regardless of your reasons for developing an OLAP database, you will almost always have to create a YTD and/or previous YTD calculation. I don't write these calculations often, which means I frequently have to scramble for the proper code and syntax to make them work. Having just recently done this for SSAS 2013, here is the process for you and me.


For starters, make sure you have created a proper date dimension with, at minimum (for best effect), a date and a year column. Yes, this seems obvious, but it still needs to be said. Now that we have that out of the way, the steps are as follows:
  1. Add a Calendar hierarchy (it works best this way)
  2. Make sure the Date dimension is specified as Time in the dimension properties
  3. Add attribute relationships (see my previous blog post on this)
Now, to create YTD, use the following MDX calculation (I know there is a YTD function, but I have had more success with this method):

AGGREGATE
(
  PERIODSTODATE
  (
    [Dim Date].[Calendar].[Year],
    [Dim Date].[Calendar].CurrentMember
  ),
  [Measures].[Amount]
)

In my example, I have a date dimension called Dim Date with a Calendar hierarchy. I called the above measure YTD. Now, using that same calculation, I added it to a ParallelPeriod function to get the previous year. The code is as follows:

SUM
(
  ParallelPeriod
  (
    [Dim Date].[Calendar].[Year],
    1,
    [Dim Date].[Calendar].CurrentMember
  ),
  [Measures].[YTD]
)
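For reference, here is how the two calculations might look together as calculated members in the cube's MDX calculation script. This is just a sketch; adjust the dimension, hierarchy, and measure names to match your own model:

// YTD: aggregate [Amount] from the start of the year up to the current member
CREATE MEMBER CURRENTCUBE.[Measures].[YTD] AS
  AGGREGATE
  (
    PERIODSTODATE
    (
      [Dim Date].[Calendar].[Year],
      [Dim Date].[Calendar].CurrentMember
    ),
    [Measures].[Amount]
  );

// Previous YTD: the same YTD calculation shifted back one year
CREATE MEMBER CURRENTCUBE.[Measures].[Previous YTD] AS
  SUM
  (
    ParallelPeriod
    (
      [Dim Date].[Calendar].[Year],
      1,
      [Dim Date].[Calendar].CurrentMember
    ),
    [Measures].[YTD]
  );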


This should work as long as you have created the YTD measure first and then created the Previous YTD calculation. Also, this requires the measure to be used with a Year filter (you will see NULLs otherwise).


Again, this is a common calculation, but one that is commonly done incorrectly. Hopefully this helps you.


Friday, March 6, 2015

Creating PowerPivot data sources


As I have demonstrated, there are a lot of things about Power BI to get excited about. One of these is Microsoft’s adoption of the xVelocity technology. The true power behind PowerPivot is the xVelocity in-memory analytics engine.
Why xVelocity is important

Before Microsoft’s introduction of xVelocity, most of the reporting offerings were via SSRS, ASP.NET, and PerformancePoint Services. SSRS and ASP.NET use a lightweight development model. These visualization tools relied on data sources like SQL Server that traditionally were stored on hard disks. Hard disks are relatively slow; whenever a system must use resources stored on disk, it impacts the response time of the report or dashboard. This is why Microsoft now uses xVelocity. xVelocity gives you two very important things: faster dataset creation (via PowerPivot) and faster responses from the visualization tools that rely on those datasets.
PowerPivot as a data source
The power to create PowerPivot data sources cannot be overstated. With this technology you can create a centralized data source that encompasses various data sources. You can create a PowerPivot data source that combines text files, Excel worksheets, and database tables into one data source. In my example, I’m going to demonstrate how to do this with SQL Server. PowerPivot data sources differ from others because their datasets are stored entirely in memory. Again, this is due to the xVelocity technology.

PowerPivot is essentially the “Tabular” database model

The Tabular model is a database design much like OLAP cubes. However, Tabular cannot totally replace MDX cubes, mainly because of Tabular's memory requirements. Tabular models must be stored in memory, while MDX cubes can be stored on physical disks, essentially allowing extremely large data sources. Unfortunately, the MDX language is more difficult to use, and OLAP cube development can be much more frustrating. DAX, which is the expression language of Tabular/PowerPivot data sources, is easier to use and understand. Excel users will find DAX easy to adopt because of its similarity to Excel expressions.
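As a quick illustration of that similarity, here is a pair of DAX measures (the Sales and Date table and column names are hypothetical placeholders, not from a specific model):

Total Sales := SUM ( Sales[SalesAmount] )

Sales Last Year := CALCULATE ( [Total Sales], SAMEPERIODLASTYEAR ( 'Date'[Date] ) )

Anyone comfortable with Excel's SUM will read the first measure immediately; the second simply composes it with a time intelligence function, with no need for hierarchy-aware MDX functions like ParallelPeriod.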

Tuesday, October 14, 2014

Tabular Processing Failures


Tabular cube development is great. But like any other part of MSBI, you'll run into issues from time to time. Recently I ran into the following error while attempting to process a tabular cube...
OLE DB or ODBC error: A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.; 08001; Client unable to establish connection; 08001; Encryption not supported on the client.; 08001.
A connection could not be made to the data source with the DataSourceID of '8e03aac2-05c6-4712-87d4-af620a257aeb', Name of 'SqlServer <Server and/ Connection Name>.
An error occurred while processing the partition 'dimXXXXX_74d208d3-9774-4094-83c1-a7e7615bfcc0' in table 'dimXXXXX_74d208d3-9774-4094-83c1-a7e7615bfcc0'.
The current operation was cancelled because another operation in the transaction failed.

This occurred after making a series of updates to the cube across several weeks. My initial thought was that something was corrupted in the project files, since it was all working just fine a week ago.
To resolve this issue, I first had to remove any connections that were not being used. This included two of the five connections I had created in the project. You can remove a connection string if there are no tables tied to the connection. To determine whether any tables are tied to a connection, you can simply try to delete it. To do this, try the following…
  1. In the menu, select Model
  2. Select Existing Connections…
  3. In the Existing Connections dialog box, attempt to delete a connection. The Tabular Model Designer will give you an error indicating "One or more tables are still associated with the data source that is being deleted. Remove the associated tables before deleting the data source: …" and will list the associated tables.
  4. If the connection is allowed to be deleted, it is safe to do so because that connection is not tied to any imported tables.
Once that was done, I changed all the connection strings from the SQL Server Native Client 11.0 (which is fine as long as SQL Server is on the same machine as the development environment) to the Microsoft OLE DB Provider for SQL Server, which is a bit more flexible (for local and remote servers).
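For reference, the change amounts to swapping the Provider keyword in each connection string, roughly as follows (the server and database names are placeholders):

Before (SQL Server Native Client 11.0):
Provider=SQLNCLI11;Data Source=MyServer;Initial Catalog=MyDatabase;Integrated Security=SSPI;

After (Microsoft OLE DB Provider for SQL Server):
Provider=SQLOLEDB;Data Source=MyServer;Initial Catalog=MyDatabase;Integrated Security=SSPI;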
Both of these together resolved my issue.
But the most important question is: how did I end up with so many connections?
Well, this was caused by improperly adding new tables. When I added new tables to the existing tabular cube, I would mistakenly start by selecting “Import from Data Source” instead of correctly using “Existing Connections”. When you select Import from Data Source, you are forced to add a new connection, which grows the list of connections. If you do this enough, you end up with multiple connection strings that are all the same except for the connection name. This gets really annoying when you process the cube and are prompted for a password for EACH connection. It also potentially introduces issues like the one discussed above.


Sunday, June 1, 2014

Sorting Dimensions in SSAS

One of the most common tasks, and one that is relatively easy to do in all Microsoft data applications, is to sort data. Unfortunately, this is not true when developing cubes in SSAS. You’d think there would be a simple “sort” option, or something where you right-click and sort by XYZ. Nope! Instead, you have to configure sorting in several steps.

Date dimension data will likely be the most commonly needed type of data to sort. Sorting won’t be an issue for numeric data, but it will be for alphanumeric data like month names. For example, when I browse my Date dimension attributes that represent month names, I get the month names in this specific order: April, August, December, February… You get the idea; they are in alphabetical instead of chronological order. This of course is useless in the real world.


To fix this, we need to specify that Month Name be sorted by the Month number. When you design your data source view, it is important that when you bring in a column that needs to be ordered outside of the natural alphabetic or numeric order, that column has an associated column just for ordering. In this case, we have Month Name (which should not be ordered alphabetically) and Month, a month number that we’ll use to sort Month Name.
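If your data source view doesn’t already carry a numeric sort column alongside the display column, you can add one in the dimension’s source query. Here is a T-SQL sketch against a hypothetical dbo.DimDate table:

SELECT
    DateKey,
    DATENAME(MONTH, [Date]) AS [Month Name], -- display column
    MONTH([Date])           AS [Month]       -- numeric sort column
FROM dbo.DimDate;

With both columns in the data source view, here is the process to sort Month Name by the Month number: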

  1. Select the Month attribute (or whatever column is used for sorting) in the Dimension Structure and select Properties.
  2. Change the following properties for the Month attribute:
      a. AttributeHierarchyEnabled to False
      b. AttributeHierarchyOptimizedState to NotOptimized
      c. AttributeHierarchyOrdered to False
  3. We then need to establish attribute relationships. In the diagram, right-click the Month Name attribute and then select New Attribute Relationship.
  4. In the Create Attribute Relationship dialog box, the Source Attribute is Month Name. Set the Related Attribute to Month.
  5. Since we expect this type of relationship not to change, set the Relationship type to Rigid in the Relationship type list. Select OK to apply the changes.

  6. Go back to the Dimension Structure tab. Select Month Name in Attributes. Change the OrderBy property to AttributeKey and change the OrderByAttribute property to Month.

You can now reprocess the Date dimension and apply the changes. If we now go back into the Dimension Browser, we can see that the month names appear in the correct order.
We can use this method to sort anything that has an associated sort column. For any dimension attribute that must be ordered in some way other than alphabetically, we have to use this type of method.

Friday, May 30, 2014

SSAS cube dimension processing failure

I was developing an SSAS cube with relatively few problems. Once I began to process the dimensions, the cube processing failed and returned the following errors...

OLE DB error: OLE DB or ODBC error: The SELECT permission was denied on the object 'dimTask', database 'Project', schema 'dbo'.; 42000.

Impersonation Information Tab
The first thing I checked was the Impersonation Information tab on the Data Source.
Secondly, I looked in SSMS to verify that this account had the appropriate SQL Server security role to read data from the Project database (or whatever database you are using). To resolve the problem, I opened SQL Server Management Studio (SSMS), went to Security, selected the specified account (SSASProc in this scenario), and selected Server Roles in the Login Properties page. I then gave this account the sysadmin server role.
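The same change can be scripted in T-SQL. This is just a sketch using the SSASProc login from this scenario; note that a narrower grant, matching the SELECT permission named in the error, would also work if you prefer not to hand out sysadmin:

-- What I did: add the service account login to the sysadmin server role
ALTER SERVER ROLE [sysadmin] ADD MEMBER [SSASProc];

-- Narrower alternative: grant only the SELECT permission the error asks for
-- (assumes the login maps to a same-named user in the Project database)
USE [Project];
GRANT SELECT ON [dbo].[dimTask] TO [SSASProc];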

 This solved my SSAS Cube processing problem. I hope this works for you.