Category Archives: SQLServerPedia Syndication

PowerPivot, Excel Services, SharePoint 2010 Farm, and You

Over the last few days I have spent an exorbitant amount of time (5-6 hours?) getting PowerPivot and Excel Services working on a SharePoint 2010 farm. I just want to get some of the things I had to do to get it all working out there, and why (or at least what I think is why).

First off, most dev setups use one box for SharePoint, which, in my eyes, masks almost every issue you will run into.

Most prod environments are multiple boxes: web front ends, app servers, etc. This is where most of the pain comes from when setting up these services (PowerPivot, Excel Services), as there are hundreds of different configurations and setups. You need to get it juuuust right.

Anyways, I will explain as best I can.

First, assume you have PowerPivot running on an app server and Excel Services running too. You upload your PowerPivot workbook to a PowerPivot Gallery and you go home happy. But wait, does the data refresh correctly? Can you even open it?

1. If your site is running https, you have to tweak the Excel Services settings.

By default, only http:// shows up here (I think?) so you need to add https:// or you can’t even really get into your PowerPivot.

[Screenshot: the Excel Services setting where https:// needs to be added]

2. You probably are going to have to up the upload size limit

Some PowerPivot workbooks are big, like 80-100 MB big. The default in SharePoint 2010 for Excel Services is something like 10 MB, and the SharePoint default upload limit is 50 MB; you need to change both settings.

3. If you don’t have Kerberos set up, it is tougher.

PowerPivot refreshes data from some source system into its own “cube”, then Excel Services refreshes data from that cube into Excel in the browser. You have to set up a way for the data to get refreshed into PowerPivot’s cube. No Kerberos? Then you need to use the Secure Store Service, set up a credential, and configure it as the unattended service account for PowerPivot refresh. Then, at least, you can get data from somewhere else into PowerPivot’s “cube”.

The second step is getting Excel Services to refresh from that cube into Excel in the browser. What I ended up doing was creating another credential in the Secure Store Service for Excel Services refresh (set up as the farm account for now, since it has the permissions it needs), and then configuring that in the Excel Services settings as the unattended refresh account. But also! You need to change your workbooks before you upload them.

In your workbook, go to your connection’s properties, find the authentication settings, and change from “Windows Authentication” to “None”; then your data will refresh from the PowerPivot cube into Excel in SharePoint.

4. Same thing goes for the PowerPivot Management Dashboards

The management dashboard workbooks are set to “Windows Authentication”, so they won’t work either; you need to change them to “None”. In the Management Dashboard site, go to All Site Content, PowerPivot Management, the <some guid> folder, and then 1033 (US English). Edit the two workbooks to use “None” and save, and your management dashboard will actually work!

I’m sure there is a ton more I could delve into here, but this is the high level. As Rob (@powerpivotpro) would say: “make sure to click on the slicers!”

Agile: Creating an SSRS Burndown Chart Part 2

In the previous post in this series, Agile: Creating an SSRS Burndown Chart Part 1, I explained what data you would need to prepare to create an SSRS Burndown Chart (Sprint_Dates, Stories, Story_History). In this part of the series I will explain how to get a basic burndown report in SSRS.

First, fire up Report Builder 3.0 and create a new report (if the wizard pops up, just pick “Blank Report”). You need to add a Data Source to your report. In my example, I am just using a database on my localhost called Agile, so I connect to that and create a report Data Source.

[Screenshot: creating the Agile report Data Source in Report Builder]

 

We then need to add three Datasets to the report (Burndown, Sprints, and CurrentSprint) and one parameter (Sprint), and then we can format our report.

 

Sprints (this will be a dropdown of Sprints for a user to choose from)

[Screenshot: the Sprints dataset query]
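Since the screenshot doesn’t come through here, a rough sketch of what the Sprints query could look like; this is an assumption based on the Sprint_Dates table from Part 1, which the burndown query below shows has a Sprint column:

-- Hypothetical sketch: one row per sprint name to feed the Sprint dropdown
SELECT DISTINCT Sprint
FROM dbo.Sprint_Dates
ORDER BY Sprint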

CurrentSprint (this will get the current sprint based on what day we view the report, default param for the Sprint parameter we will create)

[Screenshot: the CurrentSprint dataset query]
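Again, in place of the screenshot, a hedged sketch of what the CurrentSprint query might look like, assuming Sprint_Dates holds one row per working day (WorkDate) per sprint; it grabs the sprint that owns the most recent working day on or before today:

-- Hypothetical sketch: the sprint whose latest working day so far is closest to today
SELECT TOP 1 Sprint
FROM dbo.Sprint_Dates
WHERE WorkDate <= CAST(GETDATE() AS DATE)
ORDER BY WorkDate DESC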

 

For the Burndown, do the same thing, but since the query is so large, no screenshot, just the query:

;WITH DayHistory AS 
(
SELECT 
	 bd.[Date]
	,bd.PointsScheduled
	,bd.PointsLeft
	-- Goal assumes a 10-working-day sprint: burn an even slice of the scheduled points each day
	,bd.PointsScheduled - ((ROW_NUMBER() OVER (ORDER BY bd.[Date]) - 1) * (CAST(bd.PointsScheduled AS DECIMAL(15,6))/10.0)) AS 'Goal'
	,ROW_NUMBER() OVER (ORDER BY bd.[Date]) AS [DayNumber]
FROM (
	SELECT tot.Sprint,tot.LogDate AS [Date], 
		-- If a day has no points scheduled yet, fall back to the sprint's total points
		CASE WHEN SUM(tot.PointsScheduled) = 0 THEN (SELECT SUM(Points)
		FROM dbo.Stories st 
		WHERE Sprint = @Sprint) ELSE SUM(tot.PointsScheduled) END AS 'PointsScheduled', 
		SUM(tot.PointsLeft) AS 'PointsLeft'
	FROM (		
			-- Get history for the selected sprint
			SELECT Sprint,LogDate,SUM(Points) AS 'PointsScheduled', SUM(PointsLeft) AS 'PointsLeft'
			FROM dbo.Story_History st 
			WHERE Sprint = @Sprint
			GROUP BY Sprint,LogDate
			UNION
			-- Get the current day from the live Stories table
			SELECT Sprint AS 'Sprint',CAST(GETDATE() AS DATE) AS 'LogDate',
				SUM(Points) AS 'PointsScheduled',
				SUM(PointsLeft) AS 'PointsLeft'
			FROM dbo.Stories
			WHERE Sprint = @Sprint
			GROUP BY Sprint
			UNION
			-- Get zeros for all days in the sprint to round out our dataset
			SELECT Sprint,WorkDate,0,0 
			FROM dbo.Sprint_Dates
			WHERE Sprint = @Sprint
		) tot
	GROUP BY tot.Sprint,tot.LogDate			
) bd
)
SELECT
	 a.[Date]
	,ISNULL(b.PointsScheduled, a.PointsScheduled) AS [PointsScheduled]
	-- Day 1 (and only day 1, via b.DayNumber = 2) takes day 2's PointsScheduled for all three columns
	,ISNULL(b.PointsScheduled, a.[PointsLeft]) AS [PointsLeft]
	,ISNULL(b.PointsScheduled, a.[Goal]) AS [Goal]
FROM DayHistory a
	LEFT OUTER JOIN DayHistory b
		ON a.DayNumber = b.DayNumber - 1
			AND b.DayNumber = 2
ORDER BY [Date]

 

This query is where all the magic happens. First, you need to get your story point values for each day from your history and also from the current day; you also need to get all days for that sprint with zeros so that your graph will have every day, not just days with burndown. The CTE around the main query calculates the burndown by day, so you end up with four columns: Date, PointsScheduled, PointsLeft, and Goal.

Now that you have your Datasets, we need to create a parameter, and then the graph!

Create a new parameter called “Sprint”, and set up the available values. Remember the Dataset we created to get all the sprints? Here is where you use it, like this:

[Screenshot: the Sprint parameter's Available Values, pointing at the Sprints dataset]

Next, we want to set up the default value. Remember the query to get the “Current Sprint”? That is used to set our default.

[Screenshot: the Sprint parameter's Default Values, using the CurrentSprint dataset]

Once you have that all set up, it is time to build the graph!

We are really close to having a working report here. Check back for part 3 of the series to get the graph working correctly, and part 4 for beautification!

#sqlpass Summit General Thoughts

I was at #sqlpass last week. Great conference, great training, and great people. Hopefully I will go next year (I am planning on it!). General thoughts/overview:

1. Should have done pre/post cons
2. Drink more water (funny thing is, no water during refreshment breaks? just juice/coffee/soda)
3. Leave room in your luggage for schwag
4. Bring more business cards.
5. Introduce yourself to more people
6. Realize that the level #’s on the courses aren’t always indicative of the content.
7. Go to sessions with ambiguous titles, they are usually the new stuff/undercover stuff
8. The Daily Grill (restaurant attached to the Sheraton) is the most expensive place in the world
9. Plan flights with more time in between if connecting. We missed our connecting flight because of delays.
10. Hang the thing on your door from the Sheraton to skip housekeeping; you get a $5 coupon a day
11. The WiFi sucks; use your 3G or EVDO/MiFi or whatever
12. The keynotes each day can be hit or miss, but still make sure to go to all of them.
13. Leave sessions 2 min early if you want to get a good seat at a highly anticipated next session
14. Take the back stairs from level 6 to level 4 instead of the escalator for a shortcut
15. “game” gameworks – my team knows what I mean :)
16. Bush Garden is small. But you can still rock #sqlkaraoke . Just don’t sit on the couch in the bathroom.
17. You can walk pretty much anywhere (Pike St, Space Needle, etc)
18. Go talk to vendors, but don’t let them suck you in. Stick mainly to the PASS area and Microsoft area (at least that is what I think)
19. Try to take advantage of the games and contests held throughout (Foursquare, Vendor check ins, Wheel of SQL) but don’t let it consume you.
20. If you go into a session that you find boring or not relevant, don’t feel afraid to leave and go to another – the sooner the better.
21. Order the DVDs
22. Take advantage of BOF luncheon.
23. Follow the twitter stream .. #sqlpass
24. Tweet your encounters and info
25. Blog about it after :)

So much more to talk about. I will try to decompress it all and throw up a few posts, but yeah, it was well worth it: so much information, so many people, so much thinking outside the box. It gets you thinking and motivated. Great time!


SQL DBA: Starting Fresh, What do you do?

If you start at a new place as a SQL Server Database Administrator (DBA), what is one of the first things you should do? In my opinion, after figuring out the key servers and instances you need to support… it is setting up alerts.

By setting up alerts you can start to get an idea of what is not working and focus first on the things that are failing. All the while you can still check on backups and get everything else set up and working, but if you don’t have alerts, well, you are blind.

Alerts should tell you..

1) When a physical server is down (network)
2) When backups fail
3) When jobs fail
4) When logins fail
5) I/O issues
6) Errors in the “critical” severities: 14, 15, 16, 17 (see the sketch after this list)
7) Crazy CPU and memory issues
8) Services going up and down
9) Whether your SAN is up or down
10) Hard drives getting close to 100% full
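As a starting point for #6, here is a rough sketch of wiring up severity-based alerts with SQL Server Agent. The operator name and e-mail address are made up, and it assumes Database Mail and SQL Server Agent are already configured:

-- Sketch: create an operator, then one alert per severity level, each e-mailing that operator
USE msdb;
GO

EXEC dbo.sp_add_operator
     @name = N'DBA Team'                       -- hypothetical operator
    ,@enabled = 1
    ,@email_address = N'dba-team@example.com'; -- hypothetical address

DECLARE @sev INT;
DECLARE @alertname SYSNAME;

SET @sev = 14;   -- the "critical" 14-17 above, plus the severe 18-25 range
WHILE @sev <= 25
BEGIN
    SET @alertname = N'Severity ' + CAST(@sev AS NVARCHAR(10)) + N' Error';

    EXEC dbo.sp_add_alert
         @name = @alertname
        ,@severity = @sev
        ,@enabled = 1
        ,@delay_between_responses = 300;       -- seconds between repeat notifications

    EXEC dbo.sp_add_notification
         @alert_name = @alertname
        ,@operator_name = N'DBA Team'
        ,@notification_method = 1;             -- 1 = e-mail

    SET @sev = @sev + 1;
END
GO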

and that is just the beginning. What other alerts should DBAs set up *right away* to make sure they are on top of things?


Exporting Data from SQL Server to Excel Directly

I had a request to export data from SQL Server to Excel directly. There are a few ways to do this: BCP is one, OPENROWSET is another, plus SSIS, etc. Here is the OPENROWSET method, using Jet (I think this only works on 32-bit servers).

First you need to enable a setting in SQL Server, so go to Start -> Programs -> Microsoft SQL Server 2005 -> Configuration Tools -> SQL Server Surface Area Configuration, then click on “Surface Area Configuration for Features” at the bottom.

[Screenshot: SQL Server Surface Area Configuration]
Under Database Engine, make sure “Enable OPENROWSET and OPENDATASOURCE support” is checked under the Ad Hoc Remote Queries option.

[Screenshot: the Ad Hoc Remote Queries option with OPENROWSET and OPENDATASOURCE support enabled]
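If you would rather not click through the GUI, the same option can be flipped with sp_configure (the 'Ad Hoc Distributed Queries' setting is what that checkbox controls):

-- Enable ad hoc OPENROWSET/OPENDATASOURCE queries from T-SQL instead of the GUI
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;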

Then you need to create an empty Excel workbook with the columns of the query you want to export. Create one at C:\Testing.xls with “Name” and “Rating” columns.

Then run this query:

INSERT INTO OPENROWSET('Microsoft.Jet.OLEDB.4.0', 
'Excel 8.0;Database=C:\Testing.xls;', 
'SELECT Name, Rating FROM [Sheet1$]') 
SELECT 'Steve','1'
UNION
SELECT 'Joel','2'
GO

And if everything worked, your xls should have 2 rows in it.

 

Now, if you want to output to other spreadsheets, you could dynamically set the filename in the statement, or do other trickery!
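For example, a rough sketch of that dynamic approach (the @file variable and path are just illustrative, and the target .xls must already exist with the right columns):

-- Build the OPENROWSET insert as a string so the target file can vary at runtime
DECLARE @file NVARCHAR(260);
DECLARE @sql NVARCHAR(MAX);

SET @file = N'C:\Testing.xls';   -- hypothetical path
SET @sql = N'
INSERT INTO OPENROWSET(''Microsoft.Jet.OLEDB.4.0'',
    ''Excel 8.0;Database=' + @file + N';'',
    ''SELECT Name, Rating FROM [Sheet1$]'')
SELECT ''Steve'',''1''
UNION
SELECT ''Joel'',''2'';';

EXEC (@sql);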

Where does PowerPivot Fit?

Now that SQL Server 2008 R2 is out and Excel 2010 is out, you can get PowerPivot (http://powerpivot.com/) and create your own in-memory cubes!

…. Or something like that.

I still haven’t figured out where PowerPivot fits in a business scenario. Why?

Well, first you have what are now being called the “old school” BI users, who use Excel to connect to an SSAS cube and create fancy pivots and reports, maybe convert to formulas and build some nice reports/dashboards… analytics.

Then you have people who only consume canned/standardized reports, through SSRS mostly, or maybe Excel Web Services… but they don’t create. Just consume.

You might even have power users, who take Report Builder and create those SSRS reports for other users. Awesome.

But then, you have this tool, PowerPivot. What can you do? Hit databases (mostly… cubes and other sources as well), bring back data, relate it, and create pivot reports/graphs off of it.

But you better be pretty dang advanced as a business user to use PowerPivot. I could count on one hand the users (that I have dealt with over the last 10 years) I would feel comfortable giving it to and not ending up with more of a headache.

What do I think is still missing from the Microsoft BI toolset? Look at Business Objects: Web Intelligence and Universes. You create a Universe off of a data source and expose it to the users, and they can create reports and ad-hoc queries off of it.

Kind of like the ever elusive “Report Model” in the Microsoft stack that no one ever uses, ever will use, or has no reason to use. But in BO, they make it useful.

I don’t see PowerPivot taking the place of a Report Model/Universe, so where does it fit? IT Analysts making “pre” cubes before you actually make cubes for your users that just want to hit it with Excel and not care about anything else?

Or people who just want to create their own cubes in silos. Tell me how that lends itself to “one version of the truth”?

Either way, we will see how it evolves and hopefully find some good use for it. :)

SQL Schema Source Control (CodePlex)

Source Control. In my eyes, one of the best inventions of development.

Software developers have used it for years; it lets them develop easily in a team environment and worry less about losing a change or not being able to see what they did historically.

I have blogged a few times about source control before..

SQL Server Schema Automatic Revision History using DDL Triggers and SVN
MSFT BI In a Team Environment
Visual Source Safe Sucks
Source Control at Home with Subversion

But this was mostly for code, for developers. The SQL community has kind of been shafted with source control. Yeah, you can tie VSS into Management Studio, and there are others, but I have never found one that works; I just wanted something that works in the background.

I originally started doing this with DDL triggers, as my post in November (linked above) kind of outlined. It worked, but it wasn’t reliable enough for what I wanted and was too much setup.

So I did what developers usually do: I wrote an app. SQL Schema Source Control: http://sqlschemasourcectrl.codeplex.com/

At first it worked with one database on one server, and everything was hardcoded. Then it progressed, and now it is all configurable for multi-server/multi-database, etc. I decided to put it up on CodePlex because I think it can be improved and made to work with other source control providers, like TFS.

The code itself isn’t anything crazy: some file operations, SMO operations to get the DDL, and then some functions to add/update/delete and commit to source control.

For info on how to get it working, check out the documentation page on CodePlex; I can also answer any questions here or on CodePlex.

I have been using the app for a few months now and it has saved me headaches: accidental deletes, wanting to see changes over time by developers, etc.

The feature I like the most is that it logs the SQL Agent job changes, so if someone changes a job, you can see the history…

Now, there are competitors out there, Red Gate (http://www.red-gate.com/products/solutions_for_sql/database_version_control.htm) and others, but I wanted something free and open, so that is why I am putting this out there. I would be perfectly fine not publishing anything and just using it myself, but I think (and hope) others could benefit from using this app.

So if you are looking for a semi-easy way to get revision control on your SQL schemas and SQL Agent jobs, check out the app. There is minimal setup, but once you have it working, it just runs.

Note that at this time it works with SVN and SQL 2008, and the solution is VS2010. I originally had it working with 2005, but I no longer need that on my end; someone could easily make a version for SQL 2005. A VS2008 solution could be created pretty easily as well. Have fun!