Tag Archives: SSAS

Excel 2007, OLAP Cubes: Customizable, User Defined Named Sets in Excel 2007 using VBA

In a perfect world, your master data and master data management (MDM) is set up so everyone can see things how they want. Categories of items, regions, and so on are all defined in your dimension data, you can create hierarchies that make sense, and everyone is happy. But, this is not a perfect world :)

Some users want to see “their” items, “their” regions, etc. And they ask and ask for you to add them to the cube, but you have to deny them. Why? Because if you add one user’s named set, you have to add every one that gets requested, and those sets don’t make sense to 99% of the other people using the cube.

You probably could even go about doing some crazy MDX or something in your cube to create the named sets per user, but then you still have to manage it all in your MDX script in your SSAS project.

I blogged a while ago about the OLAP PivotTable Extensions on CodePlex, which is an awesome toolkit, but it is geared around calculated measures. I downloaded the code and took a gander here and there, and it would be pretty easy to modify to work with named sets (change xlCalculatedMember to xlCalculatedSet, add a radio button for adding a new calc, etc.) – but it’s in VS2005, and I have VS2008 (2005 BIDS, not C#), and right now I don’t feel like mucking around with that. Maybe someone with more time and more ambition can do it :) – Or maybe when I get some free time I will take a look, but for now here is a solution.

I took the idea outlined in the bottom half of this blog post http://blogs.msdn.com/excel/archive/2008/02/05/common-questions-around-excel-2007-OLAP-PivotTables.aspx about named sets and creating them in macros:

Sub AddNamedSet()

Dim pvt As PivotTable
Dim strName As String
Dim strFormula As String
Dim cbf As CubeField

Set pvt = Sheet1.PivotTables("PivotTable1")
strName = "[My Mountain Bikes]"
strFormula = "[Product].[Product Categories].[Bikes].[Mountain Bikes].children"
pvt.CalculatedMembers.Add Name:=strName, Formula:=strFormula, Type:=xlCalculatedSet
Set cbf = pvt.CubeFields.AddSet(Name:="[My Mountain Bikes]", Caption:="Mountain Bikes")

End Sub

That macro is pretty sweet: you can add a named set to your workbook. But it’s pretty static. You have to edit the macro every time you want to add more items, it doesn’t update or remove the sets you might already have, etc.

What I whipped up quick was an updated macro that lets you build customizable named sets based on data in another worksheet in your Excel file (you can imagine it coming from other sources – another Excel file, a SQL table, etc.). I did this with a cube we use at work, but for this example I changed it to pull from the AdventureWorks DW SSAS database, AdventureWorks cube (SQL 2005 edition).

First off, here is the macro: 

Text File of Macro: Custom_NamedSets_Macro.txt

Excel 2007 Workbook with Macro: Custom_NamedSets.xlsm

Sub AddNamedSet()

    ' data sheet top row must be set name [Set Name] and then item numbers after
    Dim sourceData As Worksheet
    Set sourceData = Worksheets("Data")
    
    Dim strName As String
    ' get the name of the set, including brackets []
    strName = sourceData.Range("A1").Value
    
    ' used range is the whole column, rangeVals is the item numbers
    Dim usedRange As Range
    Dim rangeVals As Range
    Dim maxRow As Integer
    
    Set usedRange = sourceData.usedRange
    
    maxRow = usedRange.Rows.Count
    
    ' get the item numbers
    Set rangeVals = sourceData.Range("A2:A" & maxRow)
    
    Dim strFormula As String
    Dim i As Integer
    
    ' loop through and build formula
    strFormula = "'{"
    
    For i = 1 To rangeVals.Count
       strFormula = strFormula & rangeVals(i, 1) & ","
    Next i
    
    ' remove last comma and add last curly and tick
    strFormula = Left(strFormula, Len(strFormula) - 1) & "}'"

    ' get pivot table object
    Dim pvt As PivotTable
    Set pvt = Sheet1.PivotTables(1)
    
    ' Add a calculated set using the name from the data sheet
    pvt.CalculatedMembers.Add Name:=strName, Formula:=strFormula, Type:=xlCalculatedSet
    
    ' Add a set to the CubeField object.
    Dim cbf As CubeField
    Set cbf = pvt.CubeFields.AddSet(Name:=strName, Caption:=Replace(Replace(strName, "[", ""), "]", ""))


End Sub

Sub DeleteNamedSets()

    Dim pvt As PivotTable
    Set pvt = Sheet1.PivotTables(1)
    
    Dim i As Integer
    
    For i = 1 To pvt.CalculatedMembers.Count
        pvt.CalculatedMembers.Item(i).Delete
    Next i
    pvt.RefreshTable
    
End Sub

Now, this was a 15-20 minute VBA hack. It could really use some cleanup, but it works. What I did to make it work with Adventure Works was just put the whole “member” string in the data tab of the spreadsheet. In my case at work, I was just using the item number and had part of the member string in the macro. But you can see that you could add a named set for each column, or something like that – you really could make this powerful (a rough sketch of that idea is below). And since I added the delete, if users re-run the add, it will recreate the set, so they can modify the data and re-add. It should be expanded on and made a little more robust, but you get the idea. Now, think of how you could use this with your cubes, and how you could get your users thinking about ways to use custom named sets!
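For example, here is a rough, untested sketch of the “named set per column” idea, using the same sheet and pivot table names as the example above (row 1 of each column on the Data sheet holds the set name in brackets, and the rows below hold the members):

Sub AddNamedSetsPerColumn()

    ' one named set per column on the Data sheet
    ' row 1 of each column = set name (with brackets), rows below = members
    Dim sourceData As Worksheet
    Dim pvt As PivotTable
    Dim col As Long
    Dim lastRow As Long
    Dim i As Long
    Dim strName As String
    Dim strFormula As String

    Set sourceData = Worksheets("Data")
    Set pvt = Sheet1.PivotTables(1)

    For col = 1 To sourceData.UsedRange.Columns.Count
        strName = sourceData.Cells(1, col).Value
        If Len(strName) > 0 Then
            ' last filled row in this column
            lastRow = sourceData.Cells(sourceData.Rows.Count, col).End(xlUp).Row
            If lastRow >= 2 Then
                ' build the set formula the same way as above
                strFormula = "'{"
                For i = 2 To lastRow
                    strFormula = strFormula & sourceData.Cells(i, col).Value & ","
                Next i
                strFormula = Left(strFormula, Len(strFormula) - 1) & "}'"

                pvt.CalculatedMembers.Add Name:=strName, Formula:=strFormula, Type:=xlCalculatedSet
                pvt.CubeFields.AddSet Name:=strName, Caption:=Replace(Replace(strName, "[", ""), "]", "")
            End If
        End If
    Next col

End Sub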

SQL Job – Check Cube Valid Data as Last Step

If you are running a SQL Agent job to do ETL/cube processing, you might also want to check the status of the cube after you process it, just to make sure.

Create a job step that is a T-SQL type, and use something like this:


DECLARE @forecast VARCHAR(10)

SELECT @forecast = CAST("[Measures].[Forecast-Part]" AS VARCHAR(10))
FROM OPENROWSET('MSOLAP', 'Data Source=localhost;Initial Catalog=ComponentForecast;',
    'SELECT { [Measures].[Forecast-Part] } ON COLUMNS FROM [ComponentForecast]')

IF @forecast = '0' OR @forecast IS NULL
    RAISERROR ('Cube Data Not Loaded Correctly', 17, 1)

 

Of course your MDX query in the OPENROWSET will need to be different depending on your cube. If you get more complicated, you can also just call a stored procedure and let your imagination run wild with what you can do.

* update – fixed sql code – changed from BIGINT to VARCHAR(10)
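To expand on the stored procedure idea: here is a rough sketch of wrapping the same check in a proc (the procedure name is just something I made up for this example) so the T-SQL job step only has to call EXEC:

CREATE PROCEDURE dbo.usp_CheckCubeLoaded
AS
BEGIN
    -- same check as above, just wrapped so the job step stays simple
    DECLARE @forecast VARCHAR(10)

    SELECT @forecast = CAST("[Measures].[Forecast-Part]" AS VARCHAR(10))
    FROM OPENROWSET('MSOLAP', 'Data Source=localhost;Initial Catalog=ComponentForecast;',
        'SELECT { [Measures].[Forecast-Part] } ON COLUMNS FROM [ComponentForecast]')

    IF @forecast = '0' OR @forecast IS NULL
        RAISERROR ('Cube Data Not Loaded Correctly', 17, 1)
END
GO

-- the T-SQL job step then just becomes:
EXEC dbo.usp_CheckCubeLoaded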

Microsoft Business Intelligence Development in a Team Environment

Today I received an email asking, to some extent, about best practices for development with SQL Server Integration Services (SSIS) and Business Intelligence Development Studio (BIDS) in a team environment. Here is part of the email:

Me and another DBA belong to the same team, we have a SQL server with SSIS running. We use the SSIS transfer data among multiple data sources. In SQL 2000 DTS, both of us can save the package on the server and open/edit it in the enterprise manager. In SQL 2005, I can see the package on server, but can’t open it directly. We came out a solution: create a shared folder on the server called ‘SSIS Projects’, both of us can access to it. We run the ‘SQL Server Business Intelligence Development Studio’ on local PC, to open the project in that shared folder. When done with the change, save the package to the SSIS server. Now, we have more than 50 packages in a project. Problem is: it’s very slow when open a project, ‘Business Intelligence Development Studio’ tends to open/verify every single package inside a project, takes up to 10 mins and getting worse. We really miss the SQL 2000 DTS, but we have to turn to SQL 2005.

  1. Are we doing the right thing? Is there any better solution for SSIS developing in a team environment?

  2. When open a project, does ‘Business Intelligence Development Studio’ has to open/verify every package?

 

This got me thinking, and I figured instead of writing an email back, it would be good info for a blog post. So here is what I think, and some things I have done that have worked.

First, yes, SQL 2000 DTS allows you to just edit on the server, does more than SSIS, and is just way better than SSIS. Wait, what? Well, yeah, some people will say that, because it does one thing that might be a little more rigmarole in SSIS, but no, SQL 2000 DTS is not better than SSIS. Just wanted to clear that up.

So, this is meant to be a starting point, by no means all-encompassing, and as always, YMMV.

One thing that I first thought about is this: yeah, if BI devs and SQL devs have never really worked in a team environment developing software, how would they know what to do, or what the best practices are? They would just go about “making it work” until everything breaks, or who knows what.

 

So how to develop Microsoft Business Intelligence Solutions in a team environment?

 

1) Standardize on Versions

 

First, figure out what “versions” you are going to support and what you are going to use, and get standardized on them. I am guessing the majority of BI devs right now are on the 2005 stack. Yeah, there is still probably a bit of 2000 legacy stuff out there, and some people are now getting into the 2008 stuff, but 2005 is pretty much the norm from what I see, at least at this point.

So, 2005. Get all your devs on 2005 on their machines – same patch level, etc. Get BIDS up to the same level. Get BIDS Helper installed everywhere. Strive to get all your ETL packages in SSIS 2005, get all your cubes to SSAS 2005, etc. Come to a consensus on things like config files for SSIS and naming conventions, within your development and on disk – folder structure is key! With a smaller number of versions of things floating around, it makes it easy for anyone on the team to open up a solution and start hammering away without tons of setup.

2) Get Source Control

 

This is crucial! I have talked about source control in the past, and also about some that aren’t so great. Really it doesn’t matter what you use; I prefer SVN. I install TortoiseSVN, SVN proper (to do scripting from the command line if I need to), and also purchase VisualSVN, an add-on to Visual Studio that integrates with SVN. For 50 bucks you have your source control system. Visual SourceSafe works but is outdated; honestly I hate it. Team Foundation Server is good, but expensive. Other options might be something like Git, etc. Whatever you do, just get a source control system going, and learn it well. Learn how to create repos, commit, update, revert, merge, etc. Set up a user for each BI dev and make sure they commit often, and make sure they leave comments in the source control log when they commit – history is your lifeline to go back to something if you need to! Note: exclude .suo files, bin and obj directories, .user files, etc. Anything that changes every time you build or open the project, you want to exclude from source control.

 

3) Development Box

 

You now have your versions standardized and your source control set up. You can get most of your work done on your machine, but you need somewhere to test deployments, run scenarios, etc. Make sure you have a box comparable to your production server. Set it up the same, same software, etc. Make sure it’s backed up. Let all the devs know it’s a dev box; it can be wiped at any time for any reason if need be. It can be rebooted 5 times a day if need be. It’s a dev box! But you can test and develop and tweak and change settings to your heart’s content and not have to worry about breaking Mr. Executive’s reports.

 

4) Developing, Merging, Committing, Collaborating, Communicating.

So now you have your setup, well… set up. Start creating stuff: SSIS packages, ETLs, SSAS cubes, SSRS reports, the whole MSFT BI solution. This is where stuff can start to get tricky in a team environment, though. SSIS/SSAS/SSRS isn’t as clean cut as something like C#/VB.NET. Everything is in some form of XML behind the scenes, and with graphical editing, you can move stuff around and it changes the files. Things like that are going to be your enemy. This is why you need to collaborate and communicate. Usually one person should be working on one project at a time. You can get really good at communicating and then, in SSIS at least, have multiple people working on different packages. Also, in SSAS, dimension editing and such can be done by multiple people at the same time, as long as the dim is already hooked up to the cube. But you want to make sure that you communicate: “Hey, I am checking this in, you might want to do an update”, or “Is anyone working on this or are they going to? I want to modify something, and I will check it in so you all can see it.”

You want to make sure you have your folder structure and solution/project structure set up well. C:\Projects, and then maybe a folder for each major project, like “CompanySales”, and under that “ETL”, “Cube”, “Reports”, with a solution under each that has one project of each type (sketched out below). You can also have a generic SSRS solution with many projects, which might work well for you. In any case, just come up with a standard and stick to it. Trust me, it will make your life easier. As for the question from the email above, it sounds like they have every package in one solution, one project. Sounds like it needs to be split: multiple solutions, multiple projects.
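Just as an illustration (the names here are made up, adapt them to your own projects), something like this on disk:

C:\Projects
    CompanySales
        ETL       (solution with one SSIS project)
        Cube      (solution with one SSAS project)
        Reports   (solution with one SSRS project)
    AnotherBigProject
        ETL
        Cube
        Reports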

 

5) Deployment Scenarios and Strategies

Now that you have everything developed, tested, checked in, what do you do?

Personally, for SSIS I like xcopy deployments. One folder on the server – not on the C drive, but another drive, let’s say “E:\SSIS” – and under that a folder for each project. Put your dtsx and configs in the same folder. 99% of the time you are going to call the dtsx from a SQL Agent job, and most likely you are going to run into a scenario where you need uber rights to execute it, so learn how to create a proxy/credential in SQL security so you can run the step as that (a quick sketch is below). Once you have this folder and subfolders set up, you can use something like Beyond Compare to compare the folder on the server to the one you have locally that matches. Remember to copy files from the bin directory of your project after you build it, not the files directly in your project. As far as BIDS validating every package, there are workarounds out there you can do; here is one.
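Here is a rough sketch of setting up a credential and proxy for SSIS job steps. The account, password, and credential/proxy names are placeholders, and you should double check the subsystem name against msdb.dbo.syssubsystems on your server:

-- create a credential that maps to the Windows account the packages should run as
USE master
CREATE CREDENTIAL SSISRunCredential
    WITH IDENTITY = 'DOMAIN\SSISRunAccount', SECRET = 'StrongPasswordHere'
GO

-- create a SQL Agent proxy that uses the credential
USE msdb
EXEC dbo.sp_add_proxy
    @proxy_name = 'SSISRunProxy',
    @credential_name = 'SSISRunCredential',
    @enabled = 1
GO

-- allow the proxy to be used for SSIS package execution job steps
EXEC dbo.sp_grant_proxy_to_subsystem
    @proxy_name = 'SSISRunProxy',
    @subsystem_name = 'SSIS'
GO

-- let a login use the proxy ("Run as" on the job step)
EXEC dbo.sp_grant_login_to_proxy
    @proxy_name = 'SSISRunProxy',
    @login_name = 'DOMAIN\BIDeveloper'
GO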

For SSAS, I lean towards using the Deployment Wizard that comes with SQL Server. You can use BIDS deployment, but if you start doing anything advanced with roles, partitions, etc., you are going to run into trouble. Take control and use the Deployment Wizard. I usually like to deploy and then process manually when developing, and then later use SQL Agent or an SSIS package to do the actual processing when it comes to scheduled processing.

For SSRS, I have become used to the auto deployment from Visual Studio. To really do this, though, you need a project for every folder in SSRS, which can become a pain. You can always just upload the .RDL file and connection and do it manually, but if you start off right by using the deployment from BIDS, it can make your life easier.

 

So that is just a 10 minute overview of everything, to kind of get started. Everything depends on your infrastructure, the way your team is set up, etc. But I think the biggest thing to take from all of the above is to standardize on things. If you standardize on as much as possible – SQL versions, setup of machines, naming conventions, layouts, design patterns, etc. – everyone can do things faster, and pretty soon it will start running like a well-oiled machine!

Using Offline OLAP to Develop Cube Reports Without SSAS

One feature of Excel 2007 that I think is really cool, and also a little hidden, is the “Offline OLAP” feature.

If you insert a pivot table connected to an OLAP cube into Excel 2007 and go to the PivotTable Options ribbon menu, you will see the “OLAP Tools” button. Click on that and then “Offline OLAP”.


Once you go through the Offline OLAP Wizard, it will create a .cub file for you. What this ends up being is a local disconnected “cube” you can analyze in Excel, if you are on the plane, or in some remote area with no internet connection.

Another use for the offline .cub files that I have found is this: let’s say you want developers to build web-based reports using .NET, maybe using the Dundas OLAP Services controls. If you don’t want to have to load SSAS or set them up to connect to any server, so they can just develop locally, the .cub file is the way to go. In their .NET code, they can just change the connection string to point to the local .cub file, and then later, when you are ready to go live, you can change the connection string to the live cube. That way, if you are developing the cube at the same time the reports are being developed, you don’t have to worry about uptime, etc. Just send them an updated .cub file every once in a while.
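For example, the only thing that really changes is the Data Source in the OLE DB for OLAP connection string – something like this, where the path, server, and database names are just placeholders:

Development (local .cub file):
    Provider=MSOLAP;Data Source=C:\Cubes\AdventureWorks.cub

Production (live SSAS server):
    Provider=MSOLAP;Data Source=MySSASServer;Initial Catalog=Adventure Works DW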

I don’t know much about the details for upcoming releases, like Microsoft Project Gemini, but I have a feeling that it might feel a bit like this, using Offline OLAP, or local analysis.

On a final note, if you really want to get geeky, you can actually create the .cub files from .NET, but that is another blog post :)

Business Intelligence != Reporting

Last week I was in a heated discussion about Business Intelligence, and it came up that “Business Intelligence is just writing reports, what is the difference from just writing some reports?”

I tend to disagree. Writing reports is, well, writing reports. You can write reports off of tons of data and, yeah, probably get some good info, but usually it ends up as disparate information.

I have blogged about this before, probably a few times, but this is usually what happens in organizations:

1) Some app or system is set up where data is created in some kind of data store, usually a database

2) People want to see the data in some type of report

3) Developers start cranking out reports right off the source system, stopping the bleeding for the current need for more reporting.

4) System keeps growing, more data is created/collected.

5) Reports off the source system start slowing down, timing out, many different reports end up existing for the same thing and don’t match, end users complain.

6) Developers get clever, start moving and summarizing data on some kind of interval, all with custom code and custom design. They write some reports off the summary data to once again stop the bleeding. Some reports are still off the source data. Reports don’t match. End users complain.

7) More data is collected, more users want reports, and want them with different views, grouped by this, sorted by that, add this column, remove this column, etc.

8) Developers are taking the heat. Their reports aren’t matching, they are running slow, etc. They can’t keep up with all the report requests. They decide to create a custom reporting engine on top of their summary data that allows end users to create their own reports using some quasi-query language and drag-and-drop columns/grouping. This stops the bleeding for a while.

9) Finally someone in the organization realizes that the developers are reinventing the wheel. This is where Business Intelligence comes in. The source data is left untouched, and the BI group creates an ETL to get some kind of data warehouse design and structure, using out of the box tools. (SQL Server Management Studio is an out of the box tool – you use it to write queries. SQL Server Integration/Analysis/Reporting Services are out of the box tools – you use them to create ETLs, cubes, and reports.)

10) End users can now write reports using Excel off the OLAP cube, or they use Report Builder with Reporting Services to create reports off a UDM (Unified Dimensional Model), or the BI devs create reports in Reporting Services, all off the same source data. One version of the truth, less custom reporting.

11) Developers are left to develop on the source system, turning requirements into code/features instead of focusing on reports, and the BI group is focused on getting all the data turned into information.

Of course this is a perfect scenario described above. Nothing is ever as simple as that, but we can hope. Also I am biased towards the MSFT toolset, but there are other toolsets out there that would solve the same problem.

All I know is that pretty much everywhere I go, I have seen the same thing happen, from #1 to #8 above. In some cases you never see #9-#11, and you end up in a world of hurt.

In the end though, Business Intelligence just isn’t “writing reports”, far from it.


SSAS Cube Rule #1 – YOU NEED CALCULATE

Ok, I just spent a few hours debugging a cube that wasn’t bringing back any data. Checking all the relationships, the source data, the marts, etc. And lo and behold, I found out that a colleague had commented out all the calculations, including the CALCULATE statement. The comment above it in the default script says:

The CALCULATE command controls the aggregation of leaf cells in the cube.
If deleted or modified, the data within the cube will be affected.
This command should only be edited if you intend on manually specifying how the cube will be aggregated.

Remember, you NEED the “CALCULATE” command!
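For reference, if you open the Calculations tab of the cube in BIDS and flip to Script View, the default calculation script is basically just that warning comment plus the statement itself:

/*
The CALCULATE command controls the aggregation of leaf cells in the cube.
If deleted or modified, the data within the cube will be affected.
This command should only be edited if you intend on manually specifying how the cube will be aggregated.
*/
CALCULATE;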