Upcoming Events: Open FSharp

Key Details

What: Open FSharp 2019
Where: Broadway Studios, 435 Broadway, San Francisco, CA 94133
When: Thursday, September 26th and Friday, September 27th
Admission is $375 if you’re paying out of pocket or $475 for companies paying for tickets. Register on the Open F# website.

What I’m Presenting

02:00 PM — 02:50 PM — A Map to Success

I present on Thursday the 26th, and will aim my talk at database developers interested in learning some of the cool things you can do with F#, showing that it’s a great language for data platform professionals.


Upcoming Events: SQL Saturday Boston

Key Details

What: SQL Saturday Boston
Where: Microsoft, 5 Wayside Road, Burlington, Massachusetts, 01803, United States
When: Saturday, September 14th.
Admission is free. Register on the SQL Saturday website.

What I’m Presenting

02:45 PM — 03:45 PM — Approaching Zero: Minimizing Downtime During Deployments

That 2:45 PM time slot is a murderers’ row; there are seven strong talks in it.

PolyBase: When You Have No Local Tables

Today’s PolyBase post is all about what happens when you want to join data from multiple data sources together, but none of your tables are local SQL Server tables.

No Local Tables Were Harmed in the Making of This Post

Let’s suppose we have two sets of data in two different sources. The first is North Carolina population data living in Azure Blob Storage. Here is the script to create this external table.

CREATE DATABASE SCOPED CREDENTIAL AzureStorageCredential
WITH IDENTITY = 'cspolybaseblob',
SECRET = '<It''s a secret>';
GO
CREATE EXTERNAL DATA SOURCE AzureNCPopBlob WITH
(
    TYPE = HADOOP,
    LOCATION = 'wasbs://ncpop@cspolybaseblob.blob.core.windows.net',
    CREDENTIAL = AzureStorageCredential
);
GO
CREATE EXTERNAL FILE FORMAT CsvFileFormat WITH
(
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS
    (
        FIELD_TERMINATOR = N',',
        USE_TYPE_DEFAULT = True,
        STRING_DELIMITER = '"',
        ENCODING = 'UTF8'
    )
);
GO
CREATE EXTERNAL TABLE dbo.NorthCarolinaPopulation
(
    SumLev INT NOT NULL,
    County INT NOT NULL,
    Place INT NOT NULL,
    IsPrimaryGeography BIT NOT NULL,
    [Name] VARCHAR(120) NOT NULL,
    PopulationType VARCHAR(20) NOT NULL,
    Year INT NOT NULL,
    Population INT NOT NULL
)
WITH
(
    LOCATION = N'Census/NorthCarolinaPopulation.csv',
    DATA_SOURCE = AzureNCPopBlob,
    FILE_FORMAT = CsvFileFormat,
    REJECT_TYPE = VALUE,
    REJECT_VALUE = 5
);
GO

With this combination of external data source, file format, and table, we can create our next table. This is a data set which lives in Spark and contains…well, the same data. What we join together isn’t that interesting, after all.

CREATE DATABASE SCOPED CREDENTIAL SparkCredential
WITH IDENTITY = '<Your Username>', SECRET = '<Your Password>';
GO
CREATE EXTERNAL DATA SOURCE ClusterinoSpark WITH
(
    LOCATION = 'odbc://clusterino:10016',
    CONNECTION_OPTIONS = 'Driver={Hortonworks Spark ODBC Driver}; Host = clusterino; Port = 10016; Database = default; ServerNode = clusterino:10016',
    CREDENTIAL = SparkCredential,
    PUSHDOWN = ON
);
GO
CREATE EXTERNAL TABLE dbo.NorthCarolinaPopulationTypedSpark
(
    SummaryLevel INT,
    CountyID INT,
    PlaceID INT,
    IsPrimaryGeography BIT,
    Name NVARCHAR(255),
    PopulationType NVARCHAR(255),
    Year INT,
    Population INT
)
WITH
(
    LOCATION = 'NorthCarolinaPopulationTyped',
    DATA_SOURCE = ClusterinoSpark
);

Once we have these two tables, we can join them.

SELECT
	*
FROM dbo.NorthCarolinaPopulation ncp
	INNER JOIN dbo.NorthCarolinaPopulationTypedSpark ncps
		ON ncp.SumLev = ncps.SummaryLevel
		AND ncp.County = ncps.CountyID
		AND ncp.Place = ncps.PlaceID
		AND ncp.PopulationType = ncps.PopulationType
		AND ncp.Year = ncps.Year;

You can extend this out to as many tables as you’d like. For each external table, the PolyBase engine decides whether to stream the rows back or stage them in a local temp table, and whether to push the operation down to the remote source (when possible) or pull all of the data back and do the work locally. This is true for all PolyBase data sources, not just the two I showed here.
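If you want to nudge that decision yourself, PolyBase exposes the query hints FORCE EXTERNALPUSHDOWN and DISABLE EXTERNALPUSHDOWN. Here is a minimal sketch against the blob storage table above; note that these hints only matter when the source can actually accept pushed-down work (for example, a Hadoop cluster with a resource manager configured), and the predicate value is just for illustration.

-- Sketch: ask PolyBase not to push the filter down, streaming all rows back
-- and filtering locally. Swap in FORCE EXTERNALPUSHDOWN to request the
-- opposite behavior where pushdown is supported.
SELECT
    ncp.[Name],
    ncp.PopulationType,
    ncp.Population
FROM dbo.NorthCarolinaPopulation ncp
WHERE
    ncp.Year = 2017
OPTION (DISABLE EXTERNALPUSHDOWN);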

Category Theory: The Video Lectures

I just wrapped up watching Bartosz Milewski’s YouTube series on category theory. If you are at all interested in functional programming, this is an outstanding set of lectures.

The course covers category theory, a rather abstract branch of mathematics. Throughout the twenty videos, Milewski takes us through the landscape of category theory, grounding it as much as possible in the language and concepts of programming. Some of Milewski’s examples are in Haskell but you don’t need to know that language to understand what’s going on. Similarly, other examples are in C++, but it’s clear from the context what he means even if you’ve never seen a line of the language.

I came into this course with a fair knowledge of functional programming and a less-than-fair knowledge of category theory. Watching these videos has given me a much better understanding of the topic and has really cleared up some of the trickier concepts in functional programming like monads.

For further reading, Milewski has a blog full of posts on the topic and a group has (with permission) turned his posts into a book. There are also two more courses on category theory that Milewski has put together, helping us dig even further into the topic.

If you are interested in the series, don’t get too distracted by the intentionally opaque examples. I saw the same approach in undergraduate logic courses: the professor wants to make sure you don’t get stuck treating a specific example as the only case while explaining a general concept. Milewski does a good job of pairing the highly general drawings of categories, and of how functors map one category to another, with examples we’re more familiar with, particularly sets. There’s a balancing act in choosing these examples, and I think Milewski handles it well.

It may take you a month or three to get through all of these videos, but I definitely recommend them. From here, I’m going to work through the book and fill in more gaps.

Upcoming Events: 24 Hours of PASS Summit Preview

Key Details

What: 24 Hours of PASS: Summit Preview 2019
Where: Online-Only
When: Tuesday, September 10th and Wednesday, September 11th.
Registration is free.

What I’m Presenting

08:00 PM — 09:00 PM (Eastern Daylight Time) — Architecting a Big Data Analytics Solution with SQL Server 2019

In this session, I am going to give a pre-con preview. If seeing this makes you want to get in on my full-day training, you can sign up at the PASS Summit website.

PolyBase + Dockerized Hadoop

Quite some time ago, I posted about PolyBase and the Hortonworks Data Platform 2.5 (and later) sandbox.

The summary of the problem is that data nodes in HDP 2.5 and later are on a Docker private network. For most cases, this works fine, but PolyBase expects publicly accessible data nodes by default—one of its performance enhancements with Hadoop was to have PolyBase scale-out group members interact directly with the Hadoop data nodes rather than having everything go through the NameNode and PolyBase control node.

Thanks to a comment by Christopher Conrad in that post, I learned how to solve this problem. I’ll focus on versions of HDP after 2.6.5. Once the new Cloudera gets its sandbox out, I’ll eventually get to checking that out. In the meantime, you can still grab the sandbox edition of the Hortonworks Data Platform distribution of Hadoop.

Update SQL Server Configuration

The first thing we need to do is change SQL Server’s hdfs-site.xml file. You can find it in %PROGRAMFILES%\MSSQL[##].MSSQLSERVER\MSSQL\Binn\Polybase\Hadoop\conf on a default installation of SQL Server, where [##] is the version number and MSSQLSERVER is your instance name.

Inside hdfs-site.xml, add the following property:

<property>
    <name>dfs.client.use.datanode.hostname</name>
    <value>true</value>
</property>

Now PolyBase will use hostnames rather than IP addresses. This will avoid the problem where it tries to connect to a 172.17.0.* IP address and fails because that subnet is not routable.

Update Hadoop Configuration

On the HDP sandbox, we need to open up some ports. To do so, ssh into your root node (by default, the username is root and the password is hadoop) and run the following commands to modify the proxy deployment script:

cd /sandbox/deploy-scripts/
cp /sandbox-flavor .
vi assets/generate-proxy-deploy-script.sh

You don’t actually have to use vi here, though that’s my editor of choice. Scroll down to the bottom of the tcpPortsHDP list and add entries for three ports: 8050, 50010, and 10020. Save the file, then run the following shell commands to regenerate the proxy deployment script and replace your existing proxy deployment file with the newly-generated version:

./generate-proxy-deploy-script.sh
cd /sandbox
mv proxy/proxy-deploy.sh proxy/proxy-deploy.sh.old
cp deploy-scripts/sandbox/proxy/proxy-deploy.sh proxy/

Restarting Hadoop

Now that we have a script in place, we need to stop all of the data nodes and restart the cluster. First, run ./sandbox-stop.sh to stop the sandbox. Then, run docker ps to see if any data nodes are still running; if so, kill them with docker kill and the container ID. Once everything is dead as a doornail, run ./proxy/proxy-deploy.sh to build a new image with all of the ports we need open. After it finishes, run docker ps and look for an entry which looks something like 0.0.0.0:50010->50010/tcp. If you see that, you’ve completed the mission successfully. Restart Linux on the sandbox and, after everything boots up, you should be able to use your Hortonworks sandbox with PolyBase just like any other HDP cluster.
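To check things from the SQL Server side, you can point an external data source at the sandbox. This is a minimal sketch, assuming the sandbox answers to the host name sandbox-hdp.hortonworks.com (resolvable from the SQL Server machine, for example via a hosts file entry) and uses the default NameNode port of 8020 plus the resource manager port 8050 we opened above.

CREATE EXTERNAL DATA SOURCE HDPSandbox WITH
(
    TYPE = HADOOP,
    -- NameNode RPC endpoint; 8020 is the default.
    LOCATION = 'hdfs://sandbox-hdp.hortonworks.com:8020',
    -- YARN resource manager port we opened above; needed for MapReduce pushdown.
    RESOURCE_MANAGER_LOCATION = 'sandbox-hdp.hortonworks.com:8050'
);
GO

From there, an external file format and external table work just like they did in the blob storage example earlier in this post.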

Updates to Time Series Analysis in SQL Server

A couple of months ago, I blogged about how my DataCamp course, Time Series Analysis in SQL Server, soft launched. Since then, I’ve been hard at work squashing bugs, making instructions and hints clearer, and getting this course from soft launch to hard launch.

I have a new batch of updates coming out soon, so I wanted to walk you through some of the process. First, here’s the average rating over the past eight weeks. The course launched about 10 weeks ago, so I’m stripping out that early period of first-movers who love everything and give it a 5-star rating.

It’s like watching your investment portfolio change from day to day.

The first batch of updates hit on July 16th, when I had enough data to learn where some of the problems were and fix them. That bumped the score up a bit, and it has mostly leveled off in the 4.4-4.45 range since. I’ve made a few minor changes since then, but nothing as big as that July 16th release or the upcoming one. My hope is that this next set of changes bumps me into the 4.5-4.6 range.

To figure out what to change, I get a helpful, sortable table with information on how people are doing. Here it is sorted by percentage of people who gave up and asked for the solution:

Psst, what’s the answer to question 8?

When 80% of the people are unable to answer your question, there’s a problem. Drilling in, we get to see the error messages learners receive when trying out this exercise:

I don’t think I correctly specified the 10 parameters, no.

From there, I can view the actual submitted code and compare it against my solution. Here’s the top-ranked example (with a bonus: you get the solution. If you found this blog post and remember it when going through my course, I think that’s worth one free solution):

Where it all went wrong.

With this information, I can modify the course. For this particular exercise, I decided to do two things. First, I provided the number of seconds and milliseconds for each DATETIMEOFFSETFROMPARTS() call; a common mistake was entering 8.0 for the seconds and milliseconds rather than 7.999. Second, in the hint, I provide the exact date in a format which happens to match the order of the function’s inputs. In the example above, people put in 2039 as the year instead of 2038; reading the improved hint, they’d see that the year is definitely 2038. My hope is that these two changes are enough to raise the success rate.
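As a reference, here is a hedged illustration of the function’s shape rather than the exercise’s exact answer: DATETIMEOFFSETFROMPARTS() takes ten parameters, and with a precision of 3, the fractions parameter represents milliseconds.

-- Illustrative values only: this builds 2038-01-19 03:14:07.999 +00:00.
SELECT
    DATETIMEOFFSETFROMPARTS
    (
        2038,   -- year
        1,      -- month
        19,     -- day
        3,      -- hour
        14,     -- minute
        7,      -- seconds
        999,    -- fractions: 999 milliseconds at a precision of 3
        0,      -- hour offset from UTC
        0,      -- minute offset from UTC
        3       -- precision of the fractions parameter
    ) AS SomeDateTimeOffset;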

I ended up making these changes on about 15-20 exercises and will keep making changes to make the course successful. If you are going through the course and have direct feedback, hit the (!) button at the top-right of the screen on any exercise and leave your thoughts.

If you want to feel old: there are people legally old enough to drink in the United States who had not yet been born when this episode of The Simpsons first aired.

If you want to try this course out and learn all about working with times, dates, and time-related aggregations in SQL Server, get to the course on DataCamp. The first chapter is free, though subsequent chapters come at a cost. Just money, though, not souls or firstborn children.