This page provides you with instructions on how to extract data from Microsoft SQL Server and load it into Google BigQuery. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)
What is Microsoft SQL Server?
Microsoft SQL Server is a relational database management system designed by Microsoft. Its primary function is storing and retrieving data for applications, whether they run on a single machine or across the web (or any network). Folks use Microsoft SQL Server because it supports the .NET Framework out of the box and integrates nicely into the Microsoft ecosystem.
What is Google BigQuery?
Google BigQuery is a data warehouse that delivers super-fast results from SQL queries, which it accomplishes using a powerful engine dubbed Dremel. With BigQuery, there's no spinning up (and down) clusters of machines as you work with your data. In that sense, BigQuery prioritizes querying over administration. It's super fast, and that's the reason most folks use it.
Getting data out of SQL Server
Depending on your particular needs and abilities, there are a few methods of getting data out of MSSQL.
The default for most folks who work with databases is to use queries for extraction. The main benefit is the flexibility of assembling SELECT statements: you can filter, sort, and limit to your heart's content. This is especially useful if you only need a certain subset of the data in your MSSQL database.
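As a sketch of the query-based approach, you can run a SELECT through SQL Server's sqlcmd utility and capture the output as CSV. The server name, database, table, columns, and credentials below are placeholders for illustration:

```shell
# Export a filtered subset of rows to CSV with sqlcmd.
# -Q runs the query and exits; -s sets a comma separator; -W trims padding.
# SET NOCOUNT ON suppresses the "(n rows affected)" footer.
sqlcmd -S myserver.example.com -d sales_db -U report_user -P 'secret' \
  -Q "SET NOCOUNT ON; SELECT order_id, customer_id, total, updated_at FROM dbo.orders WHERE updated_at >= '2023-01-01';" \
  -s "," -W -o orders_subset.csv
```

Because the WHERE clause does the filtering server-side, only the rows you care about ever leave the database.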
If you need to export data in bulk, you can use Microsoft SQL Server Management Studio, which lets you export entire tables and databases in formats like text, CSV, or SQL scripts that recreate the database when run.
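If you'd rather do bulk export from the command line than from Management Studio, SQL Server also ships the bcp utility. Again, the server, table, and credentials here are placeholders:

```shell
# Bulk-export a whole table to a comma-delimited file with bcp.
# -c uses character (text) format; -t, sets the field terminator to a comma.
bcp sales_db.dbo.orders out orders_full.csv \
  -S myserver.example.com -U report_user -P 'secret' -c -t,
```

bcp is typically faster than query-based extraction for full-table dumps, at the cost of the filtering flexibility a SELECT gives you.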
Loading data into Google BigQuery
Google Cloud Platform offers a helpful guide for loading data into BigQuery. You can use the bq command-line tool to upload the files to your awaiting datasets, adding the correct schema and data type information along the way. The bq load command is your friend here; you can find its syntax in the bq command-line tool quickstart guide. Iterate through this process as many times as it takes to load all of your tables into BigQuery.
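A minimal sketch of one iteration might look like the following, assuming a CSV file exported from MSSQL with a header row; the project, dataset, table, and column names are placeholders:

```shell
# Create the destination dataset (one-time step).
bq mk --dataset my_project:sales_warehouse

# Load the CSV into a table, declaring the schema inline as name:TYPE pairs.
# --skip_leading_rows=1 skips the header row in the exported file.
bq load --source_format=CSV --skip_leading_rows=1 \
  sales_warehouse.orders ./orders_full.csv \
  order_id:INTEGER,customer_id:INTEGER,total:FLOAT,updated_at:TIMESTAMP
```

Repeat the bq load step once per table, adjusting the schema string to match each table's columns.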
Keeping SQL Server data up to date
All set! You've written a script to move data from MSSQL into your data warehouse. What happens when there is new data in your MSSQL server that needs to be added to your data warehouse? Data freshness is one of the most important aspects of any analysis.
You could load the entire MSSQL database again, but that's almost guaranteed to be slow and painful, and it introduces all kinds of latency.
The key here is to build your script to recognize new and updated records in the source database. Using an auto-incrementing field as a primary key is a great way to accomplish this. The key functions something like a bookmark, so your script can resume where it left off. When you've built in this functionality, you can set up your script as a cron job or continuous loop to get new data as it appears in MSSQL.
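One way to sketch the bookmark idea is a small shell script that stores the highest key it has seen in a local file and only pulls newer rows on each run. Everything here — server, table, credentials, and the bookmark file — is a hypothetical example, not a drop-in implementation:

```shell
#!/bin/sh
# Incremental extraction sketch: remember the highest order_id exported so far
# and fetch only rows with a larger id on each run.
BOOKMARK_FILE=./last_order_id
LAST_ID=$(cat "$BOOKMARK_FILE" 2>/dev/null || echo 0)

sqlcmd -S myserver.example.com -d sales_db -U report_user -P 'secret' \
  -Q "SET NOCOUNT ON; SELECT order_id, customer_id, total FROM dbo.orders WHERE order_id > $LAST_ID ORDER BY order_id;" \
  -s "," -W -o new_orders.csv

# Advance the bookmark to the last (highest) id exported, if any rows came back.
NEW_MAX=$(tail -n 1 new_orders.csv | cut -d, -f1)
case "$NEW_MAX" in
  *[0-9]*) echo "$NEW_MAX" > "$BOOKMARK_FILE" ;;
esac
```

Scheduled as a cron job (for example, every 15 minutes: `*/15 * * * * /path/to/extract_new_orders.sh`), each run picks up where the last one left off. Note that an auto-incrementing key catches new rows but not updates to existing ones; if you need those too, bookmark on an updated-at timestamp column instead.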
Other data warehouse options
BigQuery is really great, but sometimes you need to optimize for different things when you're choosing a data warehouse. Some folks choose to go with Postgres or Redshift, which are two RDBMSes that use similar SQL syntax. If you're interested in seeing the relevant steps for loading this data into Postgres or Redshift, check out To Redshift and To Postgres.
Easier and faster alternatives
If all this sounds a bit overwhelming, don’t be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn’t a very high-leverage use of your time.
Thankfully, products like Stitch were built to solve this problem automatically. With just a few clicks, Stitch starts extracting your Microsoft SQL Server data via the API, structuring it in a way that is optimized for analysis, and inserting that data into your Google BigQuery data warehouse.