Category Archives: Business Intelligence

Dynamic Sorting Using Parameters in SSRS

The other day, someone requested that a report in SSRS be sorted differently by default. Changing the default sort makes sense if everyone wants it that way, but more than likely you have different people who each want the report sorted a different way by default. How do you do it?

There are probably a few ways, but this is what I did.

First, I added two parameters: "SortByDefault" and "SortOrder".

The "SortByDefault" parameter will be a drop-down of the columns you want to allow sorting by for your dataset (or group, or table/tablix).

The "SortOrder" parameter is simply Asc (1 to N, A to Z) or Desc (N to 1, Z to A).

Now, here is how mine look:

SortByDefault (I have two columns I want to allow sorting by: PointsLeft and StackRank)

SortOrder (Asc or Desc)

Now comes the fun stuff: Making it work.

Make sure you remove any "ORDER BY" from your dataset query (you don't have to, but it makes this easier).
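
For example, a dataset query for a report like this might look something like the following (the table and any columns besides PointsLeft and StackRank are made up for illustration) – note there is no ORDER BY, since the tablix will drive the sorting:

SELECT
    TeamName,
    PointsLeft,
    StackRank
FROM dbo.SprintStatus;   -- no ORDER BY here; sorting is handled by the tablix expressions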

I also have every column in the report set up for interactive sorting based on the column header/column it shows. I'm not sure that is necessary here, but I wanted to put that out there just in case.

Next, you want to get to your sorting options. In my case I have a tablix, so open the tablix properties window and go to the sorting option:

Now you can see my "Sort By" and "Then By" entries are expressions. It is kind of weird here: you can't set expressions for "Asc" or "Desc", so what I had to do was trick it somewhat.

The first expression handles the Asc option:

=IIF(Parameters!SortOrder.Value="Asc",Fields(Parameters!SortByDefault.Value).Value,0)

The second handles the Desc option:

=IIF(Parameters!SortOrder.Value="Desc",Fields(Parameters!SortByDefault.Value).Value,0)

You can see, some magic. If the SortOrder matches, sort by the chosen field; otherwise sort by the constant 0, which has no effect. If you notice from the screenshot, the first sort entry is set to A to Z (Asc) and the second one to Z to A (Desc). So we are basically telling SSRS which sort entry to apply based on the SortOrder parameter, and that entry already has the right direction (ASC/DESC). I think this was easier in SQL 2000 SSRS :)

Well, now you should be able to test your report and try the different sort orders. What I did next was make my params hidden. The defaults are what I wanted for the existing report (order by PointsLeft DESC), and then I created a linked report and set its hidden parameters in the report options in Report Manager to order by StackRank, ASC.

Now I have one report, with hidden sorting params and I can create linked reports with different sort options without having to create a new report. I could add all columns to the choices, or even let users choose as parameters (but they already have interactive sorting in this case).

Happy Report Buildin’!

SQL Server 2012: Data Quality Services

With the release of SQL Server 2012, I am looking more into Master Data Services (MDS) and Data Quality Services (DQS). A brief overview of DQS.

You install DQS with SQL Server, and then you have to configure it. The server configuration is a command-line process that runs to create some databases on your server (DQS_MAIN, DQS_PROJECTS, DQS_STAGING_DATA).

I ran into one issue while running the configuration. I'm not sure if this happens everywhere (I am running Windows 8), but nonetheless, I hit it. After running the tool and getting error after error, and trying as admin, etc., I dug deeper into the error message and found there were some security/permission issues I had to resolve. It ended up being that I had to change permissions on

C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config\machine.config

to allow write access. Once I did that, the configuration tool worked and I could get into DQS.

DQS gives you a "Data Quality Services Client" to work with. When you open it, connect to the server where you configured the three databases I talked about above. Once you're in, you have three panes.

You have Knowledge Bases, Data Quality Projects, and Administration.

Knowledge Bases: datasets of known data that you can use in your Data Quality Projects. You get a default Knowledge Base – state names and some other data similar to that.

Data Quality Projects: Here is where the magic happens. You can choose some source data (an Excel .xls file – .xlsx wouldn't work for me – or a SQL table) and then apply your knowledge base to it. At the end you can import the cleansed data back into SQL or export it, and update your Knowledge Base with the learned values.

Administration: Not a ton of options, but you can set some thresholds and also set up your Azure Data Market settings.

Azure Data Market https://datamarket.azure.com/browse/Data?Category=dqs – lots of data you can combine with your Knowledge Bases. There is much more here that I won't go into detail on – it could be a post in itself.

As a test, I took an Excel file and added a few records with columns first, last, city, state (I actually imported it into a staging SQL table to work with it). In the state field I put different variations of state: WI, Wis, Wisconsin, MN, Minn, Minn., etc.
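
If you want to set up a similar test, the staging table was nothing fancy – something along these lines (the table name, columns, and sample rows here are just illustrative):

CREATE TABLE dbo.DqsStagingPeople
(
    FirstName NVARCHAR(50),
    LastName  NVARCHAR(50),
    City      NVARCHAR(50),
    State     NVARCHAR(50)   -- intentionally messy values for DQS to clean up
);

INSERT INTO dbo.DqsStagingPeople (FirstName, LastName, City, State)
VALUES ('Bob', 'Smith',  'Madison',   'WI'),
       ('Sue', 'Jones',  'Milwaukee', 'Wis'),
       ('Tim', 'Olson',  'Green Bay', 'Wisconsin'),
       ('Ann', 'Berg',   'St. Paul',  'MN'),
       ('Joe', 'Larson', 'Duluth',    'Minn'),
       ('Kim', 'Nelson', 'Rochester', 'Minn.');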

I then created a new data quality project with that data, ran it against the default Knowledge Base, and it corrected the values it could. I got a weird error clicking Next in the project – it seems the button is touchy. Hopefully they come out with a fix soon.

Once you build up your Knowledge Base and get it stable, you can use it from SSIS packages or in Master Data Services. I see many useful applications for DQS, whether cleansing your corporate data or pulling in data from the Azure Data Market to cleanse existing data you might have (think: looking up gender from first/last name).

This post is a brief look at DQS and how it works, but there is so much more. I hope to get more in depth in the near future.

Yamanalysis: Analyzing Yammer and Using PowerPivot on MySQL

I have blogged before about how we use Yammer. Some interesting data can be gleaned from the usage of Yammer. One thing, though, is that the data and usage stats built into Yammer are limited, but you can pull all the data and take a look at things yourself. I ran into Yamanalysis and decided to give it a try.

After getting Ruby, Rails, MySQL, curl/curb, GraphViz, IBM WordCloud and whatever else configured, I finally got it working. (FYI – with MySQL 5.0 you need to run the config wizard as administrator on Windows 7 or it just hangs at the end.)

Pretty cool data and analysis from a higher level. Of course, after getting everything working, I wanted to hit the data with PowerPivot. This sounds like an easy feat, yet it turned out to be a complicated task.

I first got the MySQL ODBC Connector 5.1 (since PowerPivot doesn't natively connect to MySQL, and 5.1 because that is the only version I could reliably find and get to work) and set up an ODBC source. It tests fine.

In PowerPivot, I would run through the wizard and get architecture mismatches and catastrophic failures when trying to test the connection. Ignoring that and moving forward, running a query would just hang on import forever. I tried different DSNs, user/system DSNs, etc., to no avail.

What I ended up doing was firing up my local Microsoft SQL instance and creating a linked server through a system DSN to the MySQL instance; then I could query the data fine from SQL. I opened up PowerPivot, connected to the local SQL instance, ran the query against MySQL through the linked server, and it worked. What a workaround, what a hack, but at least I can hit the data in PowerPivot locally, which was my goal here.
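
For reference, the linked server setup looks something along these lines (the linked server name, DSN name, login, and the table inside OPENQUERY are all placeholders – swap in your own):

-- Linked server over the MySQL system DSN, using the OLE DB provider for ODBC
EXEC master.dbo.sp_addlinkedserver
    @server     = N'YAMANALYSIS',      -- name you will reference from SQL
    @srvproduct = N'MySQL',
    @provider   = N'MSDASQL',
    @datasrc    = N'YamanalysisDSN';   -- the system DSN pointing at MySQL

EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname  = N'YAMANALYSIS',
    @useself     = 'False',
    @rmtuser     = N'mysql_user',
    @rmtpassword = N'mysql_password';

-- PowerPivot then connects to the local SQL instance and runs a pass-through query like this
SELECT * FROM OPENQUERY(YAMANALYSIS, 'SELECT * FROM messages');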

Of course, I could take what Yamanalysis is doing and dump it to SQL, or do something similar in C# and dump to SQL – that might be a project for another day.

Emergency Broadcast: Cursors in SQL are Bad.

Emergency Broadcast:

Last week, I stumbled across a "legacy" system. A small system: a few tables, a few stored procs, a few web pages that let users manage it. The main table had about 40k rows in it, nothing huge. This system gets used sparingly.

There were reports of the main page of the system taking 5+ minutes to load, or just timing out. Really? I dug through the code and, lo and behold: a cursor.

It was looping through 500+ items and running 8 or so queries in each iteration. The structure looked something like this:

DECLARE @Item int;                          -- illustrative; the real key column isn't shown

DECLARE item_cursor CURSOR FOR
    SELECT ... ;                            -- "SELECT some stuff INTO cursor"

OPEN item_cursor;
FETCH NEXT FROM item_cursor INTO @Item;

WHILE @@FETCH_STATUS = 0                    -- loop through the cursor
BEGIN
    SELECT ... ;                            -- stuff 1 for this row
    SELECT ... ;                            -- stuff 2 for this row
    -- ...6 or so more queries per iteration...

    FETCH NEXT FROM item_cursor INTO @Item;
END

CLOSE item_cursor;
DEALLOCATE item_cursor;

I ended up rewriting it to grab all the data in one set-based query with some CASE expressions in the SELECT. The query went from 5+ minutes to less than 1 second.
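
I won't show the real schema, but the general shape of the rewrite was something like this (table and column names made up): one pass over the data, with conditional aggregation standing in for the per-row queries.

SELECT
    i.ItemId,
    i.ItemName,
    SUM(CASE WHEN t.Status = 'Open'   THEN 1 ELSE 0 END) AS OpenCount,
    SUM(CASE WHEN t.Status = 'Closed' THEN 1 ELSE 0 END) AS ClosedCount,
    MAX(CASE WHEN t.Status = 'Closed' THEN t.UpdatedDate END) AS LastClosedDate
FROM dbo.Items AS i
LEFT JOIN dbo.Tasks AS t
    ON t.ItemId = i.ItemId
GROUP BY
    i.ItemId,
    i.ItemName;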

Yes, CURSORs are bad. Avoid them. Rewrite your SQL. It amazes me that with all the good info out there on the web around SQL, people still write CURSORs in their queries. I asked the DBA to look at all the procs on all our systems and find all the CURSORs so we can root them out once and for all.
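
If you want to do a similar hunt, one quick (if rough) way is to search the module definitions on each server – expect some false positives, since this flags any proc that merely mentions a cursor:

SELECT
    OBJECT_SCHEMA_NAME(m.object_id) AS SchemaName,
    OBJECT_NAME(m.object_id)        AS ProcName
FROM sys.sql_modules AS m
JOIN sys.procedures AS p
    ON p.object_id = m.object_id
WHERE m.definition LIKE '%CURSOR%';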

End of Emergency Broadcast

Selling Management on SQL 2012

2012 is going to be a big year in the SQL world. No, the world isn't going to end. SQL 2012 should be released by Microsoft, hopefully in the first half (cross your fingers for the first quarter!) of the year. Great! But many out there are still on SQL 2005, 2008, or 2008 R2 – some even on SQL 2000 (SP4 – does that still get support?) – and you want to get to SQL 2012. What can you do to make that transition easier? You need to sell the features and benefits, just like anything else.

Clustering

If you have any kind of clustering or mirroring environment, or are even thinking about clustering, then SQL 2012 is going to be what you want. With AlwaysOn, it is dead simple to create and manage clusters. If you look back over the versions of SQL and think about clustering, you might shudder. With 2012, things become much easier, and management should see the benefit, as with anything, of making your systems more available with the new AlwaysOn features.

Master Data Services and Data Quality Services

Microsoft came out with their first round of Master Data Services (MDS) in SQL 2008 R2, but it was lackluster. The interface is clunky, weird, and hard to use. Most "end users" of MDS aren't going to be that technical. You need something simple, like SharePoint or Excel. MDS is neither (even though it's a weird version of SharePoint). With 2012, MDS is vastly improved and actually something viable that an enterprise could use for a Master Data Management (MDM) solution. Couple that with Data Quality Services (DQS) and you get tons of bang for your buck. With MDS and the Excel add-in, this will be just what the doctor ordered for MDM groups in businesses.

Business Intelligence

Near and dear to my heart, of course, is Business Intelligence. What a huge release 2012 is for BI folks. First off, a whole new Analysis Services mode: Tabular – columnar, VertiPaq-style cubes. You can reverse engineer PowerPivot models right into SSAS Tabular and then tweak them to release out to the enterprise.

Then the enhanced SSIS stuff, better IDE, better management of packages, and more. Of course the integration with the Visual Studio 2010 IDE is a welcome feature, especially for those of us that also need to work on C# and .NET 4.0 stuff!

But don't forget potentially the biggest thing yet out of the BI tools for 2012 – Power View (yes, the space is intentional; not sure why, but now we have PowerPoint, PowerPivot and Power View). Naming aside, Power View could be a HUGE analytics tool for getting more BI out to the people in an enterprise. First off, they plan on making it work on iOS! Power View works on tabular cubes, so you can see the tie-in there. The one big thing with Power View is that it only works inside of SharePoint – no standalone editor. You had better have SharePoint 2010 and a pretty good SharePoint admin alongside your BI team to get all this stuff working. Some of the enhanced end-user alerting in SSRS integrated mode looks nice as well. But once again, you need SharePoint! DON'T forget the SharePoint!

There is much more in SQL 2012 that will make DBAs' lives easier and streamline development for BI pros. Too much to outline in just one post. But if you are trying to sell a SQL 2012 upgrade to management, the "big three" things I outlined above are a good starting point. One thing to be aware of, though, is that the licensing model has changed in SQL 2012 to core-based, so you will want to read up on that.

I’m excited for SQL 2012 bits to hit and I hope you are too!

#sqlpass 2011 Summit Thoughts

I want to get a quick post out before I forget all the cool things I have been thinking about from the 2011 SQL PASS Summit in Seattle, WA.

The conference started out great with a first-timer reception (this was my second year), and I knew a few first timers, so that was great.

The first-day keynote was good, and the sessions I hit were on a full range of things: the new semantic search stuff with FileTable, and more.

I missed a couple of sessions on day 1 because I was in a Microsoft-led focus group on "BI in the cloud" – some very cool ideas were thrown out, and I'm excited to see what is coming in the future.

More good keynote content on day two, with a deeper dive into the stuff in SQL Server 2012 (formerly Denali). I went to some good sessions on Data Quality Services (DQS), VertiPaq vs. OLAP, and Power View (remember the space!) with its SharePoint integration, etc. Some very cool stuff coming soon with the 2012 release.

Tabular cubes vs. multidimensional cubes, OLAP vs. VertiPaq, etc. There are big debates and questions here – when to put things in Tabular vs. MDX, and so on. DAX solves some hard problems we run into with MDX.

Another thing I went to a couple sessions on was StreamInsight. This is some very cool technology for complex event processing, using .NET and LINQ. The StreamInsight guys are crazy smart. Also integrating StreamInsight with Azure, and doing some crazy processing and analysis of tons of data in near real time. I see this technology making a difference somewhere down the road.

Power View was another big winner from the summit. This integrates with SharePoint 2010 as a Silverlight (SL still isn’t dead?) report builder and viewer. Not even sure they are called “reports” anymore, but “views”. You create views and you can interact/edit them in real time. Export to PowerPoint too. These views work off “tabular” vertipaq BISM (BI semantic model) cubes. This technology is going to change how we deliver information to the business.

Being able to take a PowerPivot (no space!) and reverse engineer it into SQL Server Data Tools (formerly BIDS – now in the VS2010 shell) to create a BISM tabular cube and publish it to an SSAS Tabular instance looks great. Solving problems like crazy snapshotting, many-to-many relationships, and time analysis using DAX looks very useful.

Of course, GameWorks appreciation night was fun, and I found a McMenamins (Six Arms) up the road from the convention center, so I got to get a taste of Ruby again. Good conversation and good times at the Taphouse and around the Sheraton and the convention center, too. Seattle is a nice city, and the weather was pretty good, I thought. Another great summit.

SSRS Report – No Page Breaks For You!

One thing I usually run into when creating SQL Server Reporting Services (SSRS) reports is this: you have a smallish dataset coming back, maybe somewhere between 50 and 150 rows, but if it crosses the row limit for a page break, you get 3 records on the next page. Annoying. What I usually do for reports like this is just make them all one big page. How? Pretty simple, actually: set the "Interactive Height" to 0 on the report.

Open up Report Builder (or BIDS) and get to your "Report" properties, then under "Interactive Size" set the height property to zero. That's it.

Once you make this minor change, your report will just list your rows with no page breaks. Pretty useful for that report with 3 more rows than a page!