Tag Archives: SQL

#sqlpass Summit General Thoughts

I was at #sqlpass last week. Great conference, great training, and great people. Hopefully I will go next year (I am planning on it!). General thoughts/overview:

1. Should have done pre/post cons
2. Drink more water (funny thing is, no water during refreshment breaks? just juice/coffee/soda)
3. Leave room in your luggage for schwag
4. Bring more business cards.
5. Introduce yourself to more people
6. Realize that the level #’s on the courses aren’t always indicative of the content.
7. Go to sessions with ambiguous titles, they are usually the new stuff/undercover stuff
8. The Daily Grill (restaurant attached to the Sheraton) is the most expensive place in the world
9. Plan flights with more time in between if connecting. We missed our connecting flight because of delays.
10. Hang the door tag at the Sheraton to skip housekeeping for your room; you get a $5 coupon a day
11. Wifi sucks, use your 3G or EVDO/MiFi or whatever
12. The keynotes each day can be hit or miss, but still make sure to go to all of them.
13. Leave sessions 2 min early if you want to get a good seat at a highly anticipated next session
14. Take the back stairs from level 6 to level 4 instead of the escalator for a shortcut
15. “game” gameworks – my team knows what I mean :)
16. Bush Garden is small. But you can still rock #sqlkaraoke . Just don’t sit on the couch in the bathroom.
17. You can walk pretty much anywhere (Pike St, Space Needle, etc)
18. Go talk to vendors, but don’t let them suck you in. Stick mainly to the PASS area and Microsoft area (at least that is what I think)
19. Try to take advantage of the games and contests held throughout (Foursquare, Vendor check ins, Wheel of SQL) but don’t let it consume you.
20. If you go into a session that you find boring or not relevant, don’t feel afraid to leave and go to another – the sooner the better.
21. Order the DVD’s
22. Take advantage of BOF luncheon.
23. Follow the twitter stream .. #sqlpass
24. Tweet your encounters and info
25. Blog about it after :)

There is so much more to talk about. I will try to decompress it all and throw up a few posts, but yeah, it was well worth it: so much information, so many people, so much thinking outside the box, and it just gets you motivated. Great time!


SQL Schema Source Control (CodePlex)

Source Control. In my eyes, one of the best inventions of development.

Software developers have used it for years; it lets them develop easily in a team environment and worry less about losing a change or not being able to see what they did historically.

I have blogged a few times about source control before..

SQL Server Schema Automatic Revision History using DDL Triggers and SVN
MSFT BI In a Team Environment
Visual Source Safe Sucks
Source Control at Home with Subversion

But that was mostly for code, for developers. The SQL community has kind of been shafted on source control. Yeah, you can tie VSS (and others) into Management Studio, but I have never found one that works well, and I just wanted something that works in the background.

I originally started doing this with DDL triggers, as my post in November outlined. It worked, but it wasn’t reliable enough for what I wanted and was too much setup.

So I did what developers usually do: I wrote an app. SQL Schema Source Control – http://sqlschemasourcectrl.codeplex.com/

At first it worked with one database, one server, everything was hardcoded. And then it progressed, and now it is all configurable for multi server/multi database, etc. I decided to put it up on CodePlex because I think it can be improved and made to work with other source control providers, like TFS.

The code itself isn’t anything crazy: some file operations and SMO calls to get the DDL, and then some functions to add/update/delete files and commit to source control.

For info on how to get it working, check out the documentation page on CodePlex. I can also answer any questions here or on CodePlex.

I have been using the app for a few months now and it has saved headaches: accidental deletes, wanting to see changes made by developers over time, etc.

The feature I like the most is that it logs the SQL Agent job changes, so if someone changes a job, you can see the history…

Now, there are competitors out there, like Red Gate (http://www.red-gate.com/products/solutions_for_sql/database_version_control.htm) and others. But I wanted something free and open, so that is why I am putting this out there. I would be perfectly fine just using it myself, but I think (and hope) others can benefit from the app.

So if you are looking for a semi-easy way to get revision control on your SQL schemas and SQL Agent jobs, check out the app. There is minimal setup, and once you have it working, it just runs.

Note: at this time it works with SVN and SQL 2008, and the solution is VS2010. I originally had it working with SQL 2005, but I no longer need that on my end; someone could easily make a version for SQL 2005, and a VS2008 solution could be created pretty easily as well. Have fun!

SQL Server Schema Automatic Revision History using DDL Triggers and SVN

SQL 2005 introduced DDL triggers, which let you catch events on your DDL statements. Hopefully most DBAs are capturing them, at least to a table, with some kind of report on who is adding, changing, and deleting what objects.

What I wanted to do was capture that, but also keep an automatic running log in SVN (Subversion) source control. Here is how I went about it.

First, you need SVN set up somewhere. We use Unfuddle (http://unfuddle.com/) – which we also use for Agile BI stuff, but that is another post :) – and Unfuddle lets us do SVN in the cloud, but you could host it locally or internally or whatever you’d like. I created a user for this to do the automatic commits and gave it rights to commit to the repo.

Second, you probably already have DDL triggers set up to capture events on your databases and write them to a table; writing to a central DB on a server is probably a good idea. Assuming you have that, or something similar, I created a trigger on that table that captures the record just inserted, parses what I need, and automatically commits to SVN.
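In case you don’t have that logging piece in place yet, here is a rough sketch of the kind of thing I mean: a database-level DDL trigger that dumps EVENTDATA() into a log table. The dbo.DDLEventLog table and EventInstance column names here just match what the trigger later in this post expects; adjust them (and point the INSERT at your central database) to fit your own setup.

CREATE TABLE dbo.DDLEventLog
(
	EventLogId INT IDENTITY(1,1) PRIMARY KEY,
	EventInstance XML NOT NULL,
	InsertedOn DATETIME NOT NULL DEFAULT (GETDATE())
)
GO

CREATE TRIGGER trgCaptureDDLEvents
	ON DATABASE
	FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
	SET NOCOUNT ON;
	-- EVENTDATA() is the XML document describing the DDL statement that fired this trigger
	INSERT INTO dbo.DDLEventLog (EventInstance)
	VALUES (EVENTDATA());
END
GO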

But first, you want to script out all your objects for a database to a folder, say C:\DBSchemas

I created a folder structure to help me

DBSchemas\ServerName\DatabaseName\ObjectType

and then scripted out all the objects to each folder for ObjectType (Tables, StoredProcedures, Functions, and Views)

Once that was done, I did an initial import into SVN and made the folder I was working with a working copy. Then the fun starts.

I created a couple of procs (which I found online, links to the blogs are below) to CREATE and DELETE files from T-SQL using OLE automation

CREATE/APPEND (http://sqlsolace.blogspot.com/2009/01/ole-automation-write-text-file-from.html)

CREATE PROCEDURE [Utils].[usp_OLEWriteFile] (@FileName varchar(1000), @TextData NVARCHAR(MAX),@FileAction VARCHAR(12)) AS

BEGIN
DECLARE @OLEfilesytemobject INT
DECLARE @OLEResult INT
DECLARE @FileID INT

EXECUTE @OLEResult = 
 sp_OACreate 'Scripting.FileSystemObject', @OLEfilesytemobject OUT
IF @OLEResult <> 0 
  PRINT 'Error: Scripting.FileSystemObject'

-- check if file exists
EXEC sp_OAMethod @OLEfilesytemobject, 'FileExists', @OLEresult OUT, @FileName 
-- if file exists
IF (@OLEresult=1 AND @FileAction = 'APPEND') OR (@OLEresult=0) 
BEGIN   

IF (@FileAction = 'CREATENEW')
 PRINT 'New file specified, creating...'
IF (@OLEresult=1 AND @FileAction = 'APPEND') 
 PRINT 'File exists, appending...'
IF (@OLEresult=0 AND @FileAction = 'APPEND') 
 PRINT 'File doesnt exist, creating...' 

 -- open file
 EXECUTE @OLEResult = sp_OAMethod @OLEfilesytemobject, 'OpenTextFile', @FileID OUT,
 @FileName, 8, 1
 IF @OLEResult <> 0 PRINT 'Error: OpenTextFile'

 -- write Text1 to the file
 EXECUTE @OLEResult = sp_OAMethod @FileID, 'WriteLine', Null, @TextData
 IF @OLEResult <> 0 
  PRINT 'Error : WriteLine'
 ELSE
  PRINT 'Success' 
END
IF (@OLEresult=1 AND @FileAction = 'CREATENEW')
 PRINT 'File Exists, specify APPEND if this is the desired action'

EXECUTE @OLEResult = sp_OADestroy @FileID
EXECUTE @OLEResult = sp_OADestroy @OLEfilesytemobject

END
GO

DELETE (http://www.kodyaz.com/articles/delete-file-from-sql-server-xp-cmdshell-ole-automation-procedures.aspx)

DECLARE @Result int
DECLARE @FSO_Token int

EXEC @Result = sp_OACreate 'Scripting.FileSystemObject', @FSO_Token OUTPUT
EXEC @Result = sp_OAMethod @FSO_Token, 'DeleteFile', NULL, 'C:\delete-me-file.txt'
EXEC @Result = sp_OADestroy @FSO_Token

You need to make sure OLE Automation is on (and xp_cmdshell, since the trigger shells out to svn). You also need to make sure that the account you are running SQL Server as has modify rights to your DBSchemas folder.
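Something along these lines, run as sysadmin (just a sketch of the settings involved; check your own security policies before flipping these on):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ole Automation Procedures', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;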

But the crux of the solution is the trigger that gets the DDL info, writes/deletes the files, and does the svn add/delete/commit. Now, this is some ugly one-hour SQL script craziness, tons of IF statements, etc. It could be improved, but it works, it is a start, and it can be modified and tweaked to do whatever you want. Note: if your SVN repo isn’t authenticated, you don’t need the username/password on the svn commands.

You can see it gets the DDL, checks the event (and I have it limited to one database), and checks what type of object and what operation it is: for a create, it adds the file and commits; for an alter, it deletes the file, recreates it, and commits; and for a drop, it does an svn delete and commit. Pretty easy :)




CREATE TRIGGER DDLRevisionHistory
	ON dbo.DDLEventLog
	AFTER INSERT
AS

BEGIN
SET NOCOUNT ON;

DECLARE @EventType VARCHAR(50)
DECLARE @DatabaseName VARCHAR(50)
DECLARE @ServerName VARCHAR(50)
DECLARE @ObjectName VARCHAR(100)
DECLARE @SchemaName VARCHAR(10)
DECLARE @CommandText VARCHAR(MAX)
DECLARE @LoginName VARCHAR(50)

SELECT 
	@EventType = EventInstance.value('(//EventType)[1]', 'varchar(50)'),
	@DatabaseName = EventInstance.value('(//DatabaseName)[1]', 'varchar(50)'),
	@ServerName = EventInstance.value('(//ServerName)[1]', 'varchar(50)'),
	@ObjectName = EventInstance.value('(//ObjectName)[1]', 'varchar(50)'),
	@SchemaName =EventInstance.value('(//SchemaName)[1]', 'varchar(50)'),
	@CommandText = EventInstance.value('(//TSQLCommand//CommandText)[1]', 'varchar(max)'),
	@LoginName = EventInstance.value('(//LoginName)[1]', 'varchar(50)')
	FROM inserted

DECLARE @filepath VARCHAR(8000)
	
SET @filepath = 'C:\DBSchemas\' + @ServerName + '\' + @DatabaseName + '\' 

	IF (
		@EventType = 'CREATE_VIEW' OR @EventType = 'ALTER_VIEW' OR @EventType = 'DROP_VIEW'
		OR @EventType = 'CREATE_TABLE' OR @EventType = 'ALTER_TABLE' OR @EventType = 'DROP_TABLE'
		OR @EventType = 'CREATE_PROCEDURE' OR @EventType = 'ALTER_PROCEDURE' OR @EventType = 'DROP_PROCEDURE'
		OR @EventType = 'CREATE_FUNCTION' OR @EventType = 'ALTER_FUNCTION' OR @EventType = 'DROP_FUNCTION'
		) 

		AND @DatabaseName = 'YourDatabase' BEGIN


		-- write out new file to correct folder
		IF CHARINDEX('VIEW',@EventType) > 0 BEGIN 
			SET @filepath = @filepath + 'Views\' + @SchemaName + '.' + @ObjectName + '.View.sql'
		END
		
		IF CHARINDEX('TABLE',@EventType) > 0 BEGIN 
			SET @filepath = @filepath + 'Tables\' + @SchemaName + '.' + @ObjectName + '.Table.sql'
		END

		IF CHARINDEX('PROCEDURE',@EventType) > 0 BEGIN 
			SET @filepath = @filepath + 'StoredProcedures\' + @SchemaName + '.' + @ObjectName + '.StoredProcedure.sql'
		END

		IF CHARINDEX('FUNCTION',@EventType) > 0 BEGIN 
			SET @filepath = @filepath + 'Functions\' + @SchemaName + '.' + @ObjectName + '.UserDefinedFunction.sql'
		END

		IF CHARINDEX('CREATE',@EventType) > 0 BEGIN
			
			-- create file
			EXEC dbo.usp_OLEWriteFile @filepath,@CommandText,'CREATENEW'
			
			-- svn add
			DECLARE @instrAdd VARCHAR(4000)
			SET @instrAdd='svn add ' + @filepath + ' --username dbschema --password yourpassword'
			EXEC xp_cmdshell @instrAdd

			-- svn commit
			DECLARE @instrCommitAdd VARCHAR(4000)
			SET @instrCommitAdd='svn commit ' + @filepath + ' --message "added by '+ @LoginName +'" --username dbschema --password yourpassword'
			EXEC xp_cmdshell @instrCommitAdd

		END
		
		IF CHARINDEX('ALTER',@EventType) > 0 BEGIN
			
			--delete and readd file
			EXEC dbo.usp_OLEDeleteFile @filepath
			EXEC dbo.usp_OLEWriteFile @filepath,@CommandText,'CREATENEW'
			
			-- svn commit
			DECLARE @instrCommitChange VARCHAR(4000)
			SET @instrCommitChange='svn commit ' + @filepath + ' --message "changed by '+ @LoginName + '" --username dbschema --password yourpassword'
			--PRINT @instrCommitChange
			EXEC xp_cmdshell @instrCommitChange
		END

		IF CHARINDEX('DROP',@EventType) > 0 BEGIN
			-- svn delete
			DECLARE @instrDel VARCHAR(4000)
			SET @instrDel='svn delete ' + @filepath + ' --username dbschema --password yourpassword'
			EXEC xp_cmdshell @instrDel

			-- svn commit
			DECLARE @instrCommitDel VARCHAR(4000)
			SET @instrCommitDel='svn commit ' + @filepath + ' --message "deleted by '+ @LoginName +'" --username dbschema --password yourpassword'
			EXEC xp_cmdshell @instrCommitDel
		END

	END

END

As you can see, you can create a homegrown revision history of your DDL objects in SQL. I have tested this on the basic operations (no renames, etc.) using the GUI, but if you do use it, you might want to wrap it all in exception handling just to be on the safe side, something like the sketch below.
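A rough sketch of what I mean, inside the trigger: wrap the file and svn work in TRY/CATCH so a failure in the revision-history plumbing doesn’t roll back the DDL statement that fired it (the variables here are the ones already declared in the trigger above).

BEGIN TRY
	EXEC dbo.usp_OLEWriteFile @filepath, @CommandText, 'CREATENEW'
	EXEC xp_cmdshell @instrAdd
	EXEC xp_cmdshell @instrCommitAdd
END TRY
BEGIN CATCH
	-- report and swallow the error so the original DDL change still commits
	PRINT 'Schema revision logging failed: ' + ERROR_MESSAGE()
END CATCH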

Happy DBA’ing :)

SQL Server Agent – Query To Find Long Running Jobs

I use SQL Agent a ton. It is my go-to place to schedule jobs and tasks. Problem is, there are pieces “missing”. For example, if a job hangs, you can’t have it automatically stop after a given amount of time, it doesn’t alert on changes to a job, etc., etc.

I asked on Twitter what the DBAs think; @SQLRockstar thinks OpsMgr is the way to go, which would probably be great, if I had it. I needed a quick and dirty solution.

What I did was query the sysjobs, sysjobactivity, and sysjobhistory tables to get some kind of “look” into the running jobs. That way, if a job that normally takes 45 minutes has been running for 5 hours, I should be able to catch it early on; just trying to be more proactive instead of reactive.

SELECT 
	j.job_id AS 'JobId',
	name AS 'JobName',
	start_execution_date AS 'StartTime',
	stop_execution_date AS 'StopTime',
	avgruntimeonsucceed,
	DATEDIFF(s,start_execution_date,GETDATE()) AS 'CurrentRunTime',
	CASE WHEN stop_execution_date IS NOT NULL THEN 
			DATEDIFF(ss,start_execution_date,stop_execution_date) ELSE 0 END 'ActualRunTime',
	CASE 
		WHEN stop_execution_date IS NULL THEN 'JobRunning'
		WHEN DATEDIFF(ss,start_execution_date,stop_execution_date) 
			> (AvgRunTimeOnSucceed + AvgRunTimeOnSucceed * .05) THEN 'LongRunning-History'
		ELSE 'NormalRunning-History'
	END 'JobRun',
	CASE 
		WHEN stop_execution_date IS NULL THEN
			CASE WHEN DATEDIFF(ss,start_execution_date,GETDATE())
						> (AvgRunTimeOnSucceed + AvgRunTimeOnSucceed * .05) THEN 'LongRunning-NOW'
			ELSE 'NormalRunning-NOW'
		END
		ELSE 'JobAlreadyDone'
	END AS 'JobRunning'
 FROM msdb.dbo.sysjobactivity ja
	INNER JOIN msdb.dbo.sysjobs j ON ja.job_id = j.job_id
INNER JOIN (
	SELECT job_id,
	AVG
	((run_duration/10000 * 3600) + ((run_duration%10000)/100*60) + (run_duration%10000)%100)
	+
	STDEV
	((run_duration/10000 * 3600) + ((run_duration%10000)/100*60) + (run_duration%10000)%100) AS 'AvgRuntimeOnSucceed'
	 FROM msdb.dbo.sysjobhistory
	WHERE step_id = 0 AND run_status = 1
	GROUP BY job_id) art 
	ON j.job_id = art.job_id
WHERE 
(stop_execution_date IS NULL) OR 
	(DATEDIFF(ss,start_execution_date,stop_execution_date) > 60
	AND 
	CAST(LEFT(start_execution_date,11) AS DATETIME) = CAST(LEFT(GETDATE(),11) AS DATETIME))
ORDER BY start_execution_date DESC

The inner query looks at the history to get the average runtime (sysjobhistory stores run_duration as an HHMMSS integer, hence the math to convert it to seconds), and I added the standard deviation to that to allow some slack, in case you have a one-off that throws the average off. Of course, this all depends on you having more than 3 entries in your job history as well.
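Quick sanity check of that run_duration math, since the HHMMSS-as-integer format looks odd the first time you see it:

-- run_duration = 13245 means 1 hour, 32 minutes, 45 seconds
DECLARE @run_duration INT
SET @run_duration = 13245
SELECT (@run_duration/10000 * 3600)
	+ ((@run_duration%10000)/100 * 60)
	+ (@run_duration%10000)%100 AS 'DurationSeconds'	-- returns 5565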

I then just compare the average to the current runtime (if the job is executing) and to the actual (if it is complete).

You could use this query in a report that you view throughout the day to catch long-running jobs, or you could use it to return just the ones that are currently running and over the time threshold, and alert you, so you can catch them right away.
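For the alerting piece, here is a rough sketch of one way to do it, assuming you have Database Mail configured and have saved the query above as a view; the profile, recipient, database, and view names below are just placeholders to swap for your own:

IF EXISTS (SELECT 1 FROM DBAdmin.dbo.vw_JobRunTimes WHERE JobRunning = 'LongRunning-NOW')
BEGIN
	EXEC msdb.dbo.sp_send_dbmail
		@profile_name = 'DBA Mail',			-- your Database Mail profile
		@recipients = 'dba-team@example.com',
		@subject = 'Long running SQL Agent job detected',
		@query = 'SELECT JobName, StartTime, CurrentRunTime
			FROM DBAdmin.dbo.vw_JobRunTimes
			WHERE JobRunning = ''LongRunning-NOW'''
END

Drop that into its own SQL Agent job on a short schedule and you have a poor man’s alert.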

I tested this on a few servers running SQL 2005, but I am guessing it will work on SQL 2008 as well. Find and kill those long running jobs!!

T-SQL: Using CROSS APPLY to Turn 2 Queries Into 1

Parent/child. Order/line. Header/detail. Report/sub-report. We have all run across these database designs and have had to write queries against them. You usually end up having the parent id and needing to get the children. Or you have a list of parents and need to loop through them, looking up the children records and doing something with them. But what if you just want a list of the children ids (or names, or whatever)? Do you really need to loop through the parents and look up all the children (and possibly loop through those)? You can do JOINs and get the data in a tabular format, but how do you roll up those children records?

Using the AdventureWorks DB in SQL 2005, an example using Manager/Employee:

SELECT DISTINCT mgr.ManagerId, e.EmployeeId
	FROM HumanResources.Employee mgr
	INNER JOIN HumanResources.Employee e ON mgr.ManagerId = e.ManagerId

Results:

[screenshot: ManagerId / EmployeeId result rows]

But really we want to roll up those employees, ending up with one manager/employees record, e.g. 3, [4,9,11,158,263,267,270]… for this, try CROSS APPLY:

SELECT DISTINCT ManagerId, Employees = LEFT(emp.list, LEN(emp.list)-1) 
	FROM HumanResources.Employee mgr
	CROSS APPLY
	(
	SELECT CONVERT(VARCHAR(4),EmployeeId) + ',' AS [text()] 
		FROM HumanResources.Employee e
		WHERE mgr.ManagerId = e.ManagerId
	
		ORDER BY EmployeeID
		FOR XML PATH('')
	) emp (list)
WHERE mgr.ManagerId IS NOT NULL

Results:

[screenshot: ManagerId with comma-delimited Employees list]

As you can see from the results, we rolled up our employees into one record per manager, into a comma delimited list. Think of some possibilities of using CROSS APPLY in your apps or stored procs/reports to reduce the number of queries you might have to write, or number of trips to the database you might have to do. Happy T-SQL’ing :)

SSAS 2005/2008: Creating Sub-Cubes Using XMLA, Variables, and Named Query Where Statements

I blogged a few weeks ago about creating local cubes with XMLA and ascmd.exe, and that is pretty cool, works great. There are some snags, though, if you use a 64-bit server: you can’t connect to the local cubes via .NET, Excel, or any other way, because there are no drivers for local .cub files on 64-bit Windows. What a downer, and I am not going to wait around for them to come out. So in the meantime, you can easily create “sub cubes” in SSAS instead.

One easy way to create sub cubes is this method. Say you have one dimension that all your measures relate to (or two; usually they all relate to Date), for example Location (DimLocation), and you want to create sub cubes based on a set of locations. Here is what I did.

1) In my Data Source View (.dsv) in my SSAS solution, I replaced all views/tables with Named Queries. (you will see why later)

2) Once I have my “main” cube deployed (and processed if you like, doesn’t matter), I need to create an XMLA to create a sub cube.


3) In the newly created XMLA you have to edit some things

a) I edit the database Name and ID

b) Search for msprop:QueryBuilder="SpecificQueryBuilder" – this is where the named queries for all your tables are. You can go ahead and add a WHERE clause to each one (e.g. WHERE LocationId IN (1,2,3))

c) I usually wrap the whole XMLA in a <Batch>, and then at the bottom, after the </Create>, I put a Process command, something like this (with your new database's ID in the Object element):

  <Process>
    <Type>ProcessFull</Type>
    <Object><DatabaseID>YourSubCubeDB</DatabaseID></Object>
  </Process>

4) Run your XMLA and it should create and process a new SSAS db, creating a pre-sliced version of your original cube.

 

Now, there are other ways to do all this, one of them being <Filter> in XMLA, but I couldn’t get it to work the way I wanted; that is why I went this route, and it just so happens that I am lucky enough to have one dimension relate to all the measures :)

Using CTE’s to Create Dynamic Pivot Tables in SQL 2005/2008

One of the cool new commands introduced in SQL 2005 was the PIVOT command (and UNPIVOT). One thing that has always irked me about PIVOT was that you need to “hard code” the column names you want to actually pivot on. This is usually fine for a static report, say you want to pivot on Apples, Oranges, Bananas. Great. But what if you want to pivot on Month/Year? “Apr 2009”, “May 2009”, etc, etc. Still ok, if you want to hardcode your month years and they never change. But in some cases, you need to keep adding the current “Month Year” combo to the query.

You can go the old-fashioned route and build a dynamic SQL statement and EXEC it (a quick sketch of that is below), or you can try to figure out a way to make your PIVOT itself dynamic.
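For reference, a rough sketch of that dynamic SQL route, using the same DimDate/DataHistory tables as the query further down (build the column list from the data, then execute the assembled PIVOT):

DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX)

-- build the pivot column list from the data, e.g. [Apr 2009],[May 2009],...
SELECT @cols = STUFF((
	SELECT DISTINCT ',' + QUOTENAME(CalendarMonthYearName)
	FROM dbo.DimDate
	WHERE Date > '02/01/2009' AND Date < '07/01/2009'
	FOR XML PATH('')), 1, 1, '')

SET @sql = N'SELECT ItemNumber, ' + @cols + '
	FROM (SELECT d.CalendarMonthYearName, s.ItemNumber, s.Quantity
		FROM DataHistory s
		INNER JOIN dbo.DimDate d ON s.DateKey = d.DateKey) AS SourceTable
	PIVOT (SUM(Quantity) FOR CalendarMonthYearName IN (' + @cols + ')) AS PivotTable'

EXEC sp_executesql @sql

The downside is the usual dynamic SQL downside: harder to read and debug, which is part of why I went the numbered-column route below.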

What I did here is create a dataset, add a numeric ordering, and then pivot on that instead of the actual “Month Name Year” value (e.g. “May 2009”). That way, my pivot columns are always the same (6 months’ worth, or whatever you like), and you can pass in parameters for the date filters in the top query to get the correct range.

-- get your distinct month/year vals into a temp table
-- need temp table here because CTE doesn't like a subquery and row_number
SELECT DISTINCT CalendarMonthYearName, MonthOfYear,CalendarYear
      INTO #tmp_monthyears
      FROM dbo.DimDate 
      WHERE Date > '02/01/2009' AND Date < '07/01/2009'
      ORDER BY CalendarYear DESC,MonthOfYear DESC;

-- create a CTE that adds a "Row Number" which is number 1-6
WITH CalMonthYears AS
(
-- add your number
SELECT CalendarMonthYearName AS 'CalendarMonthYearName',
      MonthOfYear,
      CalendarYear,
      ROW_NUMBER() OVER (ORDER BY CalendarYear DESC, MonthOfYear DESC) AS 'YearMonthNum'  
      FROM  #tmp_monthyears

)
SELECT ItemNumber,
      [1],[2],[3],[4],[5],[6]
FROM
(
-- your query to get your data, pivot this data
SELECT YearMonthNum,s.ItemNumber,
      SUM(Quantity) AS 'Quantity'
      FROM DataHistory s
      INNER JOIN dbo.DimDate d ON s.DateKey = d.DateKey
      INNER JOIN CalMonthYears cmy ON d.CalendarMonthYear = cmy.CalendarMonthYearName
GROUP BY YearMonthNum,s.ItemNumber
) AS SourceTable
PIVOT
(
SUM(Quantity)
FOR YearMonthNum IN (
      [1],[2],[3],[4],[5],[6]
      )
) AS PivotTable;

DROP TABLE #tmp_monthyears

The sky is the limit with this, you can modify this or think of other ways to use this logic to make your PIVOT commands dynamic, so you don’t have to keep editing stored procedures every time your pivot columns change :)

The problem isn’t SQL Server. It’s you.

Throughout my years in different places, I have seen SQL Server, Oracle, Firebird, MySQL, DB2, Zortec, Access, and probably a few other crazy databases set up, run, and administered. Of course, most of them along the way have been Microsoft SQL Server (6.5, 7, 2000, 2005, 2008). I’ve worked with some knowledgeable DBAs, and in those cases everything usually turns out okay.

But sometimes, in some department or place (your buddy down the street wanting to start a new company, your girlfriend’s place of work that wants to track orders, whatever), they try to get SQL Server running, and what happens next just makes my head spin. Microsoft, bless them, tried at some point in the past (not so much now) to market SQL Server as “self manageable”. Probably sometime between 6.5 and 7, they tweaked some update-stats routines and schedules and it’s all good, right? Set autogrow by default and you are good to go. Wrong.

What this awesome marketing strategy did was get people, places, and organizations, mostly ISVs, to install SQL Server, get their app running, and walk away. Of course it runs for a while, runs like a champ even. But then months, even years, go by and the system starts running slow. There is no DBA around; they didn’t need one, SQL Server manages itself! Wrong again.

What you might end up with, though, are people using the system who know a little bit, enough to be deadly even, and they start making changes, when in reality you need a full-fledged DBA to manage your server and databases, hence the name DBA (database administrator). But before the DBA comes onto the scene to save the day, you will have the people who blame SQL Server: “Oh, SQL Server doesn’t work at all, it can’t perform”… or “Our other databases run 10x as fast, what gives?” (not mentioning they have 3 DBAs for those “other” databases, but none for MS SQL), and the quotes keep coming.

That is why the title of this post is what it is.

The problem isn’t SQL Server, it’s you.

If you fail to realize that MS SQL is an enterprise-class database system, and treat it like some out-of-the-box, already-configured, plug-and-play system, you are going to run into issues eventually. You need a DBA. It is probably best to have one BEFORE you implement any system, even if it is a consultant to guide your implementation and assist as time goes on.

I sometimes get tired of arguing that MS SQL can hold its own against Oracle, DB2, whatever. Trust me, it can. I could probably find tons of SQL DBAs who would back me up as well. It is all about how you manage and administer it! SQL Server does just fine, as long as you know what you are doing, just like any system. I sometimes think that if we took SSMS away and made everything command line/scripting, people “outside” of the MS SQL community would see how MS SQL compares to their own systems.

This post isn’t meant to be a beat-down rant or anything, and the same things can be said for .NET compared to Java, C++, or whatever. It just seems that people who live and breathe Microsoft SQL need to know what the other RDBMS/BI systems are capable of, but for some reason the same isn’t true for people who use the other systems. They kind of just brush MS SQL off as a play toy, something that shouldn’t be taken seriously, a “hobbyist” SQL system, something no enterprise would be caught dead running, unless, of course, you are Microsoft. :)

I’m still betting on MS SQL and .NET; I haven’t seen anything better for the price and ease of use, and the best part about it is the community. The MS SQL and development community is huge compared to anything else, and to me that just puts the icing on the cake. Just remember, the next time someone who needs an MS SQL DBA but doesn’t have one complains about the performance of their system, you can tell them it’s not SQL Server’s fault; it’s probably the neglect of SQL Server that caused the problems.

SQL Server Management Studio and Excel – Column Headings on Copy/Paste

Here is a tip about a setting that I think should be on by default, but isn’t. How many times are you writing T-SQL in SQL Server Management Studio (SSMS) and need to just copy the results out, paste them into Excel, and either do more analysis or send them off to someone? What happens when you have a result set with 20 columns? 30? 50?

Well, the old way to get the column headings into Excel was to just type them in. Ouch. That gets old really fast. But there is a setting in SSMS that most people don’t even know exists, and once they find out about it, they are like, dang! I wish I had known about that.

Well, fire up SSMS: Tools -> Options -> Query Results -> SQL Server -> Results to Grid. Check “Include column headers when copying or saving the results”, hit OK, and any new query window you open will automatically include the column headings from your result set when you copy the data!

 

[screenshot: SSMS Options dialog, Query Results > SQL Server > Results to Grid]

Random Things around the Net Week of 02/14/2009

Yammer iPhone app goes Open Source – Yammer is a kind of Twitter clone, but for your company. I believe it is made by the same people that created Geni.com. Pretty cool that it is open source, just another way to get a good idea on how to build iPhone apps. WordPress has their app open source as well.

iusethis.com – stumbled upon this site, good way to find nice apps for mac, iPhone, windows, see what people are using.

Best Website Monitoring/Performance Tool – from Mahalo.com – if you manage web servers or are a server admin, read it.

wwwsqldesigner – if you want to make SQL ERD diagrams through your browser, check it out.

Feed Flix – if you use Netflix, check out this site. You link it to your account and it gathers metrics on everything you do with Netflix; really eye-opening. It tells you things like your average cost per DVD rental, how long you keep titles rented out, etc.

Pismo File Mount – ever wanted to mount a zip file as a drive? Check this app out.