Category Archives: Business Intelligence


Using Power Query to Analyze Your Schedule

I am in a lot of meetings. A LOT of meetings. Double, triple, quad booked. I guess when you get to manager or director level somewhere, that is the definition of “busy”, or maybe everyone just wants you in their meeting, or your opinion, or whatever. In the end, “Meetings are Toxic” (from 37signals), but really they are sometimes a necessary evil.

Anyways, do you really know where you spend all your time? Well you can glean the information pretty easily using Excel and Microsoft Power BI (Power Query specifically).

First, the key for me is to “categorize” my meetings. You can create categories in Outlook and then assign them to meetings; you can even color code the categories.

Where does Power Query fit in? Well, you can connect to Exchange as a data source.


Then you can query your calendar “table”, and pull it into Excel.



Then, as with any table, you can pivot it, pull Category over as the row, and look at the count. With some column work in the Power Query query, you can split out the date/time into Month/Day/Year and create a semi-hierarchy to see things over time.
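If you'd rather see the rollup logic spelled out, here is a minimal Python sketch of the same count-by-category-and-month that the pivot does (the meeting rows and category names are made up for illustration; Power Query hands you the real ones from the Calendar table):

```python
from collections import Counter
from datetime import datetime

# Hypothetical rows, shaped like what comes out of the Calendar table:
# a category and a start time per meeting.
meetings = [
    {"Category": "1:1",       "Start": "2014-01-06 09:00"},
    {"Category": "1:1",       "Start": "2014-01-13 09:00"},
    {"Category": "Project X", "Start": "2014-01-06 10:00"},
    {"Category": "1:1",       "Start": "2014-02-03 09:00"},
]

def count_by_category_month(rows):
    """Count meetings per (category, year-month) pair, like the pivot."""
    counts = Counter()
    for row in rows:
        start = datetime.strptime(row["Start"], "%Y-%m-%d %H:%M")
        counts[(row["Category"], start.strftime("%Y-%m"))] += 1
    return counts

result = count_by_category_month(meetings)
```

Same idea as dragging Category onto rows and Month onto columns, just without the spreadsheet.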


For example, I took over 2 teams in January, and my meetings with them and related projects skyrocketed in January. Now I know what was taking my time up for Q1 2014 :)

At least the number is going down :)

There is so much more you can do with Power BI and Exchange data (your email, calendar, contacts, etc.); this is just the tip of the iceberg, and it should only take you 10 minutes or so to get to this result! Now, if I can just figure out how to get out of the meetings!


How Cold Is It?

With the latest “Polar Vortex” or whatever happening, EVERYONE is talking about the weather. Everyone always talks about how it has been this cold many times before, etc, etc. “It was colder in my day” – ok. Well, prove it!

So I took a look at the NOAA data you can get here: http://www.ncdc.noaa.gov/cdo-web/ and got an extract to CSV for my hometown of Chisholm, MN (actually the Hibbing/Chisholm airport, since it has data from 1962 to today).

I downloaded the CSV, opened it in Excel 2013 and imported it into Power Query. I then did some formatting to get the date parts and a date field, and converted the temperatures from tenths of a degree Celsius to Fahrenheit. Then I started analyzing.
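The conversion itself is just arithmetic; here is the same step as a small Python function (NOAA's GHCN-Daily extracts store TMIN/TMAX in tenths of a degree Celsius):

```python
def tenths_c_to_f(tenths):
    """Convert a NOAA temperature value, given in tenths of a degree
    Celsius, to degrees Fahrenheit.

    For example, a raw TMIN of -456 is -45.6 C, which is -50.08 F.
    """
    celsius = tenths / 10.0
    return celsius * 9.0 / 5.0 + 32.0
```

In Power Query this is the same math, just done in a custom column.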

I will have to refresh this after this cold spell, because it only has data to 1/1/2014, and these last few days have been cold, but not the coldest.

Back when I was 16, in 1996, there was a stretch of days in January that were COLD. The data supports this. First I took all the days with a low temp UNDER -35 degrees F.

Chisholm Low Temps

 

You can see there are a bunch of days in Jan/Feb 1996 that were UNDER -35 degrees F. So then I copied that pivot and expanded the date range to see all the days.

Jan/Feb Chisholm Low Temps

 

Pretty dang cold from 1/19/1996 to 2/4/1996. The lowest day was -50 degrees F, with an average of -31 degrees F. Of course these are “real” temps; it was even colder with wind chill. These last 3-4 days of -20 to -40 are cold, but I am not sure they are colder than Jan 1996. We will see when it is all said and done.
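The filter-and-aggregate behind that pivot is simple once the data is in shape. A quick Python sketch of the same idea (the sample rows here are made up, not the actual NOAA extract):

```python
from datetime import date

# Hypothetical (date, low_in_fahrenheit) pairs standing in for the
# cleaned-up NOAA data.
lows = [
    (date(1996, 1, 19), -39.0),
    (date(1996, 1, 20), -50.0),
    (date(1996, 2, 4),  -22.0),
    (date(1996, 2, 10), -10.0),
]

def cold_snap_stats(rows, start, end):
    """Lowest and average low for the days in [start, end],
    like the expanded pivot over the cold stretch."""
    snap = [low for d, low in rows if start <= d <= end]
    return min(snap), sum(snap) / len(snap)

lowest, avg = cold_snap_stats(lows, date(1996, 1, 19), date(1996, 2, 4))
```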

If you can’t remember how cold it was, NOAA, Excel and Power Query can remind you. :)

I have the spreadsheet up on SkyDrive: http://sdrv.ms/1gBwMPL

Day 3 and Overall Impressions #sqlpass #summit12

To conclude my posts on the PASS Summit this year (see day 1 review and day 2 review), I want to go over the last day and then talk a little about the entire conference and my takeaways.

On Friday I attended three sessions. For the second-to-last slot of the day I was on the phone and missed the beginning, then decided to check out all the other things before they were done; for the last session slot there wasn’t much appealing, and most everyone had already left, so I skipped it.

1. CLD-303-A SQLCAT: What are The Largest Azure Projects in the World?

This was given by Kevin Cox and another SQLCAT member. The SQLCAT team is crazy smart. If you can talk to any of them, you need to, any chance you get. This was a good view of customers they have dealt with that are pushing SQL Azure to the limits. Since we are running a project now where we are going to be pushing SQL Azure (and Azure) hard, I thought this was good.

Customers have 20 TB databases, or 10k databases, so it for sure can scale. There were also some good tips/tricks on what you can do to use SQL Azure to the max like those customers do.

2. BIA-203: Real-Time Data Warehouse and Reporting Solutions

I wasn’t sure on this one. Carlos Bossy gave a couple of presentations, and he seemed to know what he was presenting, but a topic like this is so situational it is tough to make it generic. Also, there isn’t a “huge” need for real time, and I don’t think I would implement it the way he was describing anyway. Run SSIS 24/7 with a for loop that never ends? That is crazy. I’d rather pump data through something like StreamInsight with some code than use that SSIS solution, or run things every couple of minutes or so for “near real time”. Also, his solution was using replication, which is fragile.

3. BIA-402-M: Optimizing Your BI Semantic Model for Performance and Scale.

Probably one of the better sessions. Again, Microsoft guys letting it all out here: Akshai Mirchandani and Allan Folting from Microsoft, going *in depth* on how PowerPivot and Tabular do what they do with columnar compression, etc., and where you can dig under the hood to find ways to make small changes and optimize processing or querying depending on your need. This is a session I want my entire BI team to watch together.

Overall Takeaways Technically:

Azure, Hadoop, Tabular, Power View, BI, DAX, Excel. You can see a pattern here. I am sure there were good “DBA” and DB Dev sessions, but I didn’t go to any. BI is taking shape with Microsoft’s strategy, and it is all Tabular/Excel and Azure/Hadoop stuff. Exciting times.

Overall Thoughts of This Year’s Summit and SQL PASS

Guidebook – it was ok, but I thought it could have been better. I have used it before at conferences. Why no native Windows Phone app?

New Layout – the last two years I was at the summit, things were laid out (as far as where things were) pretty much the same. This year it was changed up; it took a day to get the “lay of the land”.

Keynotes – kind of the same as usual. I mentioned in my part 2 blog how the blogger/twitter table needs to grow up; I just want to say that again here. The first day there was some drama, the second day more drama and badmouthing/infighting. It just needs to stop. Leave the drama at home.

Seattle – Seattle is great. Not going to Seattle next year, since the summit is in Charlotte, is going to be tough. I know where to go in Seattle and I like the area. I am worried about Charlotte.

Reg Dates – as I mentioned in my day 1 review, many people came out a day early since the dates said 6th-9th. It is the same thing next year; it should really say 7th-9th.

Hash Tags – on twitter, usually the hash tag is #sqlpass. This year they said to use #summit12, and some people were using #summit2012 and got confused. Also, #summit12 wasn’t watched by as many people, which stinks as I used it on all my tweets. Next year they need to just keep it as one hash tag.

Karaoke – I have been to the unsanctioned one. It was great. Not sure sanctioning karaoke like this year makes sense. It loses some of what made it cool to begin with. I could get into a lot of detail here but I hope people understand what I mean… taking something “underground” and trying to make it mainstream, usually doesn’t work as well.

#sqlfamily – this is something that I have many thoughts on. I will say things, but I don’t think many want to hear them. “sqlfamily” isn’t as big of a family as those in the echo chamber think it is. I would say 99% come to the summit and have no real idea of what it even means. The 1% that tweet, present, and schmooze think everyone else feels and interacts the same way they do, and it just isn’t true. I met many people at breakfast/lunch and after hours that in fact have no real want/need to be totally ingrained with the clique. Many don’t even use twitter, etc. They are just going to work, doing their job, trying to learn, etc. I think it would make sense for the SQL/SQL PASS community to step back and think about that for a while.

This year I wrote a blog post for the SQL Server Blog before the summit to drive excitement, which was cool. The first day at the keynote, a guy sat next to me and we were talking before it started. He was like “dude, I read your blog on the SQL Server blog!” – to me that was so cool. He said I was a “rockstar”. No, I am not a rockstar (or an MVP – but the blog says I am, so maybe the emails have been going to my spam folder all these years) – I am just a regular tech guy who is passionate about technology, SQL, BI (and a ton more). I was really happy, though, to see that people are reading that content and it is firing them up; that was my intention. And if you read that post, I took back a ton of good stuff from the summit. I am already starting to formalize and get strategy/implementation plans going for things I directly learned.

So to close, my third summit was great. Great content, meeting new people and seeing old faces and having lively discussions and knowledge sharing during the day and over a beer. I am going to miss Seattle next year but I can’t wait for the next summit, and possibly even the new SQL BA (Business Analytics) conference in April 2013. I hope everyone who went to the Summit this year enjoyed it and learned as much as I did!

Day 2 Review #sqlpass #summit12

To follow up on my first post about day one of this year’s PASS Summit, here is how day two played out.

The “keynote” here was some PASS discussions, then Quentin Clark (MSFT exec) and Julie Strauss (wicked smart) doing an end-to-end demo on many things: Hadoop, Azure, Data Explorer, Power View, Excel, etc. The blogger table was pretty annoying with their tweets during the demo calling it out as boring and not what DBAs want, failing to remember that half the conference is BI people. I think the demo was “dry”, but they showed many things and tied them together. I saw Julie at TechEd and she knows what she is doing. Of course every year the blogger table is going to say “zoom” at the presentations (which, yes, they should be doing, or changing resolution), but to see the bantering back and forth on twitter is just bad overall for the people attending, watching, and looking for info. The blogger/twitter table should be relaying information that people at home are clamoring for, not badmouthing the presentation/presenters.

I hit up 4 sessions in all on Thursday, Nov 8th.

1. BID-307-M: Using Power View with Multidimensional Models

As I mentioned for day one, I try to get to presentations by Microsoft employees, and today was no different. The first one was with Bob Meyers and Sivakumar Harinath. This was a deep dive into the newly announced functionality, yet to be released or given a date, that will let us hit OLAP cubes with Power View. Honestly, I wish Microsoft would have released this from the get-go. One thing I don’t understand though is why Power View uses DAX to hit OLAP and Tabular, while Excel uses MDX to hit OLAP and Tabular. Seems split-brained to me. Choose one and go. There were many audience questions in this one, and one downfall of Microsoft employee presentations is that they have a hard time saying “no” and get into discussions with audience members, many times taking too much time on some specific question.

The presentation was good, and we learned some things: new dimension properties for ImageUrl, Geography (for mapping), etc., and what will and won’t work with Power View and OLAP. Good stuff.

2. BIA-400-HD: Enterprise Data Mining with SQL Server

This was a double session, and I just stayed for the first half. Mark Tabladillo (marktab) is a PhD, so that tells you something. Data Mining in SSAS/SQL Server has always been an enigma since day one. I don’t know of many using it in real life (besides the AdventureWorks demo?) – it is kind of like SSAS cube writeback: awesome, but not widely used. He showed how you can use the SSAS Data Mining cubes and the Excel add-in to do forecasting and basket analysis, and how to get into some of the options and get data out yourself to make your own visualizations. Pretty cool stuff, but like I said, I left halfway through…

3. BIA-309-M: Enriching Your BI Semantic Tabular Models with DAX

I left the Data Mining session early to get a good seat for this presentation. Kasper de Jonge from Microsoft is one I always try to get to, as he is wicked smart as well, and usually his presentations are awesome; this one was no different. Getting into the details with DAX and just seeing someone like Kasper use PowerPivot and Excel shows how “he” would use it, being a program manager, which is different from most. It is great to pick up tips/tricks and just see how he goes about doing even the basics. He even showed off the trick of changing the DAX on an imported table to a DAX query to get whatever you want back from your tabular cube; he has a blog post to the same effect that I went through a while ago, which was cool.

4. BIA-206-M: BI Power Hour

Finally, to end the day: Matt Masson and Matthew Roche again, with Patrick LeBlanc, Peter Myers, Sean Boon and Chuck Heinzelman.

This presentation reminded me of a Brian Knight spectacular: throwing trinkets, books, etc. to the audience, goofy stuff. Pretty funny, and they went through SharePoint, SSIS, Power View, etc. Very lighthearted and a good way to end a 2nd day of non-stop technical things. Matt Masson is probably a stand-up comedian at night, just funny stuff. I have seen Chuck present before and he is good. Sean showed us some PowerPivot with Olympic data and shark bite data, Patrick a Windows Phone app with Azure and SQL Data Sync, Matt an SSIS data app, and Peter Myers filled in at the end by capturing data from the audience over mobile and slicing/dicing it. I have seen Peter before as well and he is very methodical; it was his first “power hour” and it showed, but hopefully he does it again and is a bit more prepared.

Thursday night was the appreciation event, a gathering at the EMP (music museum) in Seattle. They shuttle you over and back. Two free drinks, food (I think I had mac and cheese 3 nights in a row for some reason last week), and you can tour around the museum. There was #SQLKaraoke, but the sanctioned one, not the one at Bush Garden. Live band and you get to sing, pretty cool stage and everything. Again, bummer, my voice was out or I would have sung a tune.

So to wrap up my 2nd full day: BI, BI, BI all day. More to come with the last day and overall thoughts for this year.

Day 1 Review #sqlpass #summit12

This SQL PASS Summit was my third, and it was good. Kind of crazy timing, as we just had a baby 2+ weeks ago, so I am very lucky I got to go.

Day one was Wednesday, Nov 7th. There is a kickoff thing the night before, which is always good to see everyone again, etc. There are pre-cons the two days before (5th, 6th). I, like many I talked to, came out on the 5th thinking the conference started on the 6th; we were mistaken, so it was kind of a free day, but there were still things going on. The website said 6th-9th, so we all assumed without digging into the details. At least I wasn’t the only one.

The first day keynote was good: Ted Kummert from Microsoft, whom I have seen a few times now, and the same cast of characters, with Amir Netz showing off more Power View and movie data. The big things announced that made me perk up were SQL Server 2012 SP1 and Power View over OLAP (coming soon?). No big flashy giveaways like BUILD, but a good keynote, and then the fun starts.

I attended 4 sessions on Wednesday.

1. BIA-303: What’s New in Analysis Services 2012? – Chris Webb

This was my first session of the day, and it was in 305-TCC. TCC was across the street, which maybe was the case in years past, but I never had to go there, so everyone seemed lost. We finally got there, but then Chris Webb told us that the abstract was wrong in some places and the talk would mostly be about Tabular, not Multidimensional. Oh well, good stuff anyway. There was one slide about OLAP stuff. The biggest thing I got out of this was xEvents for SSAS, and how to pull them into PowerPivot. This was the first time I have seen Chris Webb present, and it was good.

2. BIA-316-M: Enterprise Information Management: Bringing Together SSIS, DQS, and MDS

For the second session, it was two Microsoft employees. I like to try to hit many sessions by Microsoft employees because, well, they usually have worked on the products, they get into details, and they sometimes let some juicy details slip.

Matt Masson and Matthew Roche are great presenters, funny, and they play off each other. They showed off SQL Server 2012 MDS and DQS and discussed how they could and should be used in orgs. Master data is a huge issue in many businesses, and the Microsoft solution looks really good: using DQS along with SSIS to clean your data, or as a very smart “spell checker”, and then MDS to track changes and workflow, and to send data back to source systems if you’d like. The big thing I took away was how they see MDS fitting into businesses: a BI team should implement MDS/DQS to make sure their dimensional data is clean and the “golden master” they need for great BI reporting, with updating back to source systems as a secondary thing.

3. BID-212-S: Around the World with SharePoint BI Toolbelt

This was a typical Brian Knight session, though not as huge of a production as some I have seen. Just him, his employee/BI architect, and a helper/demo person.

They showed quickly how to get SharePoint set up for Excel Services and Power View and then did some demos. Overall good stuff, but it seemed a bit rushed and some things didn’t work. They demo’d PerformancePoint, which, who knows what future that has, but it seems like the best tool for OLAP scorecards in SharePoint. PerformancePoint has been an enigma for us to do anything with; not sure we ever will. I always see it demo’d, see the benefits, and see what it can do, but we never get around to doing it. Maybe someday, or maybe it will just get replaced by something else…

As in his other sessions, he brings up a salesperson from his team or someone new to show how easy it is for a non-techie to use Power View (or whatever tool they are presenting) and go through a little demo.

4. BID-102: Mobile Business Intelligence for Everyone, Now!

The final presentation of the day was with Jen Stirrup, who also won the PASSion award on Thursday. I chatted with her briefly Wednesday morning, which was good, as I hadn’t met her before this summit. The presentation was OK. It was 100 level, but I wanted to see some Mobile BI. I had high expectations, as I saw Jen Underwood present on Mobile BI at TechEd, so I was expecting more of the same. Jen Underwood was actually in the audience and answered some audience questions.

The presentation had some technical glitches, and also dug a little too deep into visualization discussion, which is good, but I wasn’t expecting it in this one; maybe in a different session. Jen showed some stuff on her iPad and talked about how she uses Azure and SSRS in Azure, and also HostedPowerPivot, which was good stuff, but nothing new that I didn’t see at TechEd.

I use MobiSSRS for SSRS reports on iOS and that works great. She didn’t mention it, but Mobile BI presentations can get into “3rd party app here and there” territory instead of what you can do out of the box. With mobile BI though, the first question is “do you run SharePoint?” and the second is “Is it Enterprise?”, because that makes a big difference in what you might try to do.

Wednesday was a good day. I didn’t do much in the evening besides grab a bite to eat and hit the hay. The bummer this year was that I started getting a cold on the plane ride out, and it ate at my voice all week. Nothing too serious, but enough to not want to talk about BI in a pub much, as you have to yell.

More to come about Day 2 and Day 3, and overall thoughts..

Trek BI Agile Story Board

Business Intelligence – 3 Years of Agile

Last year at the PASS summit, I ran into someone and was discussing project management and Business Intelligence. They were adamant that agile couldn’t work, and I had to correct them, as I have now been doing agile for three years at Trek in our BI group.

Yes, some things don’t exactly parallel software development, but many things do: sprints, standups, estimation, stories, points, scrum master, releasing/delivering value in your iterations. And now, with practices like unit testing of SQL code and more, things are getting even closer to software development in that regard.

I have an entire blog category dedicated to agile – mostly concepts, but some of the posts also talk about Business Intelligence teams.

Just remember, you can do BI and agile; it works, and you can deliver a ton of value to the organization. Someone who might argue with you either doesn’t know what they are doing in BI or in agile, or their organization isn’t willing to change, in which case of course it wouldn’t work.

Advice: make sure you hold retrospectives, and make sure you make adjustments based on whatever comes out of them.


#msteched TechEd 2012 Conference Highlights

Last week I was in Orlando at Microsoft TechEd 2012. It was a great conference, especially for me, since I am all over the place with technology. There were DBA, SysAdmin, Developer, etc. sessions.

What did I take away? Here is a quick overview.

Azure – the cloud. It is here (well, it already was), but now with IaaS and persistent VMs, the game has changed for Azure. Also, with websites, media streaming and other stuff, Azure is becoming *the* platform.

SQL 2012 – while I was away, the BI team upgraded to 2012 at work. While I have been playing with 2012 for a while, it just got real. BISM stuff and SharePoint/Kerberos stuff came to light this week. Good stuff.

System Center – while I didn’t focus much on this, System Center is where it’s at. You can now monitor your Azure cloud and your on-prem VMs (Hyper-V or VMware) and do tons of other stuff. The sky is the limit with this product.

There is so much more but it was a great time. Good people, good sessions, smart presenters, and just information oozing out of every room. Orlando in early June is like a sauna though :)

I am really pumped to get back in the saddle and start implementing some of this stuff in day-to-day solutions. I was so excited I actually moved my blog to an Azure website this weekend. It moved fine, and I had to do some MySQL stuff to get my content moved, but it is basically sitting there idle, as I can’t change the DNS over without paying more than I am willing to. More to come in this arena though.

To the cloud! (and the Tabular cube, and the System Center, and so on and so forth)…


Dynamic Sorting Using Parameters in SSRS

The other day, someone requested that a report in SSRS be sorted differently by default. While that might make sense if everyone wants it that way, more than likely you have different people who each want the report sorted differently by default. How do you do it?

There are probably a few ways, but this is what I did.

First, I added two parameters: “SortByDefault” and “SortOrder”.

“SortByDefault” will be a drop-down of the columns you want to sort by for your dataset (or group, or table/tablix).

The “SortOrder” is simply Asc (1 to N, A to Z) and Desc (N to 1, Z to A)

Now, here is how mine look:

SortByDefault (I have two columns I want to allow sorting by, PointsLeft and StackRank):

SortOrder:

Now comes the fun stuff: Making it work.

Make sure you remove any “ORDER BY” in your dataset query (you don’t have to, but this makes it easier).

I also have every column in the report set up for interactive sorting based on the column header/column it shows. I am not sure that is necessary here; I just wanted to put it out there just in case.

You want to get to your sorting options. In my case I have a tablix, so get to your tablix property window and the sorting option:

Now you can see, my “Sort By” and “Then By” are expressions. It is kind of weird here. Also, you can’t set expressions for “Asc” or “Desc”, so what I had to do was trick it somewhat.

The first expression handles the Asc option:

=IIF(Parameters!SortOrder.Value="Asc",Fields(Parameters!SortByDefault.Value).Value,0)

The second handles the Desc option:

=IIF(Parameters!SortOrder.Value="Desc",Fields(Parameters!SortByDefault.Value).Value,0)

You can see, some magic. If the SortOrder matches, the expression uses the field; otherwise it returns a constant 0. If you notice from the screenshot, the first sort is A to Z (Asc) and the second is Z to A (Desc). So we are basically telling SSRS, via the SortOrder parameter, which of the two sort expressions is “live”, and that expression carries the right direction (ASC/DESC). I think this was easier in SQL 2000 SSRS :)
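To make the trick concrete outside of SSRS, here is the same two-key idea as a Python sketch: the first key sorts ascending and is only “live” when the order is Asc, the second sorts descending and is only live for Desc, and the inactive key collapses to a constant 0. (The rows and field name are made up, and this sketch only handles numeric fields, since it negates the value for the descending key.)

```python
def dynamic_sort(rows, sort_by, sort_order):
    """Mimic the pair of SSRS sort expressions with a two-part sort key."""
    return sorted(
        rows,
        key=lambda r: (
            # "Sort By" slot, direction A to Z: live only for Asc
            r[sort_by] if sort_order == "Asc" else 0,
            # "Then By" slot, direction Z to A: live only for Desc
            -(r[sort_by] if sort_order == "Desc" else 0),
        ),
    )

rows = [{"PointsLeft": 5}, {"PointsLeft": 1}, {"PointsLeft": 3}]
asc = dynamic_sort(rows, "PointsLeft", "Asc")
desc = dynamic_sort(rows, "PointsLeft", "Desc")
```

When the key is the constant 0, that slot contributes nothing to the ordering, which is exactly what the IIF expressions do in the report.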

Now you should be able to test your report and try the sort orders. What I did next was make my params hidden. The defaults are what I wanted for the existing report (order by PointsLeft, DESC). Then I created a linked report and set the hidden parameters in the report options in Report Manager to order by StackRank, ASC.

Now I have one report with hidden sorting params, and I can create linked reports with different sort options without having to create a new report. I could add all columns to the choices, or even let users choose via the parameters (but they already have interactive sorting in this case).

Happy Report Buildin’!


SQL Server 2012: Data Quality Services

With the release of SQL Server 2012, I am looking more into Master Data Services (MDS) and Data Quality Services (DQS). Here is a brief overview of DQS.

You install DQS with SQL Server, and then you have to configure it. The server configuration is a command-line process that runs to create some databases on your server (DQS_MAIN, DQS_PROJECTS, DQS_STAGING_DATA).

I ran into one issue with running the configuration. I am not sure if this happens everywhere (I am running Windows 8), but nonetheless, I ran into it. After running the tool and getting error after error, and trying as admin, etc., I dug deeper into the error message and found that there were some security/permission issues I had to resolve. It ended up being that I had to change permissions on

C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config\machine.config

to allow write access. Once I did that, the configuration tool worked and I could get into DQS.

DQS gives you a “Data Quality Services Client” to work with. When you open it, connect to the server where the three databases I mentioned above were configured. Once you are in, you have 3 panes.

You have Knowledge Bases, Data Quality Projects, and Administration.

Knowledge Bases: datasets of known data that you can use in your Data Quality Projects. You get a default Knowledge Base with state names and some other similar data.

Data Quality Projects: here is where the magic happens. You can choose some source data (an Excel .xls file – .xlsx wouldn’t work – or a SQL table) and then apply your Knowledge Base to it. At the end you can reimport your data back into SQL or export it, and update your Knowledge Base with learned values.

Administration: not a ton of options, but you can set some thresholds and also set up your Azure Data Market settings.

Azure Data Market: https://datamarket.azure.com/browse/Data?Category=dqs – lots of data you can use to combine with your Knowledge Bases. There is much more here that I won’t go into detail on; it could be its own post in itself.

As a test, I took an Excel file and added a few records with columns first, last, city, state (I actually imported them into a staging SQL table to work with them). In the state field I put different variations of the state: WI, Wis, Wisconsin, MN, Minn, Minn., etc.
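Conceptually, what DQS does with that state column is a synonym lookup against the knowledge base's leading values. This Python mapping is a toy stand-in to show the idea (real DQS also scores confidence and flags values it cannot correct, rather than just returning None):

```python
# Toy stand-in for a DQS knowledge base domain: known variants map to
# a single leading ("corrected") value.
STATE_SYNONYMS = {
    "WI": "Wisconsin", "Wis": "Wisconsin", "Wisconsin": "Wisconsin",
    "MN": "Minnesota", "Minn": "Minnesota", "Minn.": "Minnesota",
}

def cleanse_state(value):
    """Return the corrected state name, or None if the value is unknown."""
    return STATE_SYNONYMS.get(value.strip())
```

The interesting part of DQS is that the knowledge base learns new variants over time from the projects you run, so the mapping grows instead of being hand-maintained like this one.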

I then ran the file through a new Data Quality Project against the default Knowledge Base, and it corrected the values it could. I got a weird error clicking Next in the project; it seems the button is touchy. Hopefully they come out with a fix soon.

Once you build up your Knowledge Base and get it stable, you can use it from SSIS packages or in Master Data Services. I see many useful applications for DQS, either around your corporate data or pulling in data from the Azure Data Market to cleanse existing data you might have (think: looking up gender from first/last name).

This post is a brief look at DQS and how it works, but there is so much more. I hope to get more in depth in the near future.


Yamanalysis: Analyzing Yammer and Using PowerPivot on MySQL

I have blogged before about how we use Yammer. Some interesting data can be gleaned from its usage. One thing, though, is that the data and usage stats are limited in the Yammer admin area, but you can get all the data out and take a look at things yourself. I ran into Yamanalysis and decided to give it a try.

After getting Ruby, Rails, MySQL, curl/curb, GraphViz, IBM WordCloud and whatever else configured, I finally got it working. (FYI: with MySQL 5.0 you need to run the config wizard as administrator on Windows 7, or it just hangs at the end.)

Pretty cool data and analysis from a higher level. Of course after getting everything working, I wanted to hit the data with PowerPivot. This sounds like an easy feat, but yet seemed to be a complicated task.

I first got the ODBC Connector 5.1 for MySQL (since PowerPivot doesn’t natively connect to MySQL, and 5.1 since that is the only version I could reliably find and get to work) and set up an ODBC source. It tests fine.

In PowerPivot, I would run through the wizard and it would get architecture mismatches and catastrophic failures trying to test the connection. Ignoring that and moving forward, running a query would just hang on import forever. I tried different DSNs, user/system DSNs, etc., to no avail.

What I ended up doing was firing up my local Microsoft SQL instance and creating a linked server through a system DSN to the MySQL instance; then I could query the data fine from SQL. I opened up PowerPivot, connected to local SQL, and ran the query through to MySQL, and it worked. What a workaround, what a hack, but at least I can hit the data in PowerPivot locally, which was my goal here.

Of course, I could take what Yamanalysis is doing and dump it to SQL, or do something similar in C# and dump it to SQL; that might be a project for another day.