Upcoming Speaking Engagements

It’s been a busy few months, so I’m going to stay radio silent for a little while longer (though Curated SQL is going strong).  In the meantime, here’s where I’m going to be over the next couple of months:

  1. On Saturday, May 19th, I’ll be in New York City for SQL Saturday NYC, where I’m going to do two presentations:  Using Kafka for Real-Time Data Ingestion with .NET and Much Ado About Hadoop.
  2. On Tuesday, May 22nd, I’ll be in Charlotte presenting for the Enterprise Developers Guild, giving a talk entitled A .NET Developer’s View of the OWASP Top 10.
  3. On Thursday, May 31st, I’m going to give a paid pre-con entitled Enter the Tidyverse for SQL Saturday Mexico.  Click the link for instructions on signing up.
  4. Then, on Saturday, June 2nd, I’ll be in Mexico City for SQL Saturday Mexico, where I will present two talks:  Data Cleansing with SQL and R and Working Effectively with Legacy SQL.
  5. On Thursday, June 7th, I will present my R for the SQL Server Developer talk to the Roanoke Valley .NET User Group.
  6. Then, on June 9th, I’m going to give 3 sessions at SQL Saturday South Florida.  Those talks are Eyes on the Prize, Much Ado About Hadoop, and APPLY Yourself.
  7. I’ll be in Houston, Texas on June 23rd for SQL Saturday Houston.  There’s no official confirmation on talk(s) just yet, but I can confirm that I’ll be there and will do at least one session.

I have a few more irons in the fire as well, but this wraps up my May and June.

Enter The Tidyverse, Columbus Edition

In conjunction with SQL Saturday Columbus, I am giving a full-day training session entitled Enter the Tidyverse:  R for the Data Professional on Friday, July 27th.  This is a training that I did earlier in the year in Madison, Wisconsin, and aside from having no voice at the end, I think it went really well.  I’ve tweaked a couple of things to make this training even better; it’s well worth the low, low price of $100 for a full day of training on the R programming language.

I use the term “data professional” on purpose:  part of what I do with this session is show attendees how, even if they are database administrators, it can pay to know a bit about the R programming language.  Database developers, application developers, and budding data scientists will also pick up a good bit of useful information during this training, so it’s fun for the whole data platform.

Throughout the day, we will use a number of data sources which should be familiar to database administrators:  wait stats, database backup times, Reporting Services execution log metrics, CPU utilization statistics, and plenty more.  These are the types of things which database administrators need to deal with on a daily basis, and I’ll show you how you can use R to make your life easier.

If you sign up for the training in Columbus, the cost is only $100 and you’ll walk away with a better knowledge of how you can level up your database skills with the help of a language specially designed for analysis.  Below is the full abstract for my training session.  If this sounds interesting to you, sign up today!  I’m not saying you should go out and buy a couple dozen tickets today, but you should probably buy one dozen today and maybe a dozen more tomorrow; pace yourself, that’s all I’m saying.

Course Description

In this day-long training, you will learn about R, the premiere language for data analysis.  We will approach the language from the standpoint of data professionals:  database developers, database administrators, and data scientists.  We will see how data professionals can translate existing skills with SQL to get started with R.  We will also dive into the tidyverse, an opinionated set of libraries which has modernized R development.  We will see how to use libraries such as dplyr, tidyr, and purrr to write powerful, set-based code.  In addition, we will use ggplot2 to create production-quality data visualizations.

Over the course of the day, we will look at several problem domains.  For database administrators, areas of note will include visualizing SQL Server data, predicting error occurrences, and estimating backup times for new databases.  We will also look at areas of general interest, including analysis of open source data sets.

No experience with R is necessary.  The only requirements are a laptop and an interest in leveling up your data professional skillset.

Intended Audience

  • Database developers looking to tame unruly data
  • Database administrators with an interest in visualizing SQL Server metrics
  • Data analysts and budding data scientists looking for an overview of the R landscape
  • Business intelligence professionals needing a powerful language to cleanse and analyze data efficiently

Contents

Module 0 — Prep Work

  • Review data sources we will cover during the training
  • Ensure laptops are ready to go

Module 1 — Basics of R

  • What is R?
  • Basic mechanics of R
  • Embracing functional programming in R
  • Connecting to SQL Server with R
  • Identifying missing values, outliers, and obvious errors

Module 2 — Intro To The Tidyverse

  • What is the Tidyverse?
  • Tidyverse principles
  • Tidyverse basics:  dplyr, tidyr, readr, tibble

Module 3 — Dive Into The Tidyverse

  • Data loading:  rvest, httr, readxl, jsonlite, xml2
  • Data wrangling:  stringr, lubridate, forcats, broom
  • Functional programming:  purrr

Module 4 — Plotting

  • Data visualization principles
  • Chartjunk
  • Types of plots:  good, bad, and ugly
  • Plotting data with ggplot2
    • Exploratory plotting
    • Building professional quality plots

Module 5 — Putting it Together:  Analyzing and Predicting Backup Performance

  • A capstone notebook which covers many of the topics we covered today, focusing on Database Administration use cases
  • Use cases include:
    • Gathering CPU statistics
    • Analyzing Disk Utilization
    • Analyzing Wait Stats
    • Investigating Expensive Reports
    • Analyzing Temp Table Creation Stats
    • Analyzing Backup Times

Course Objectives

Upon completion of this course, attendees will be able to:

  • Perform basic data analysis with the R programming language
  • Take advantage of R functions and libraries to clean up dirty data
  • Build a notebook using Jupyter Notebooks
  • Create data visualizations with ggplot2

Pre-Requisites

No experience with R is necessary, though it would be helpful.  Please bring a laptop to follow along with exercises and get the most out of this course.

What Comes After Go-Live?

This is part eight of a series on launching a data science project.

At this point in the data science process, we’ve launched a product into production.  Now it’s time to kick back and hibernate for two months, right?  Yeah, about that…

Just because you’ve got your project in production doesn’t mean you’re done.  First of all, it’s important to keep checking the efficacy of your models.  Shift happens, where a model might have been good at one point in time but becomes progressively worse as circumstances change.  Some models are fairly stable, where they can last for years without significant modification; others have unstable underlying trends, to the point that you might need to retrain such a model continuously.  You might also find out that your training and testing data were not truly indicative of real-world data, particularly in that the real world is a lot messier than what you trained against.

The best way to guard against this kind of silent model shift is to take new production data and retrain the model.  This works best if you can keep track of your model’s predictions versus actual outcomes; that way, you can tell the actual efficacy of the model, figuring out how frequently and by how much your model was wrong.
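
To make that concrete, here is a minimal sketch of the kind of tracking I have in mind.  The table and column names are hypothetical (they aren’t from the salary survey project); the point is simply to join logged predictions to observed outcomes and watch the error over time:

library(dplyr)
library(lubridate)

# Hypothetical prediction log:  one row per prediction we served.
predictions <- tibble::tibble(
  CaseID = 1:6,
  PredictionDate = as.Date("2018-05-01") + 0:5,
  PredictedValue = c(100, 110, 95, 120, 130, 90)
)
# Hypothetical outcomes:  what actually happened for each case.
actuals <- tibble::tibble(
  CaseID = 1:6,
  ActualValue = c(98, 112, 99, 140, 160, 70)
)

# Join predictions to outcomes and track mean absolute error by month.
predictions %>%
  inner_join(actuals, by = "CaseID") %>%
  mutate(
    AbsoluteError = abs(PredictedValue - ActualValue),
    Month = lubridate::floor_date(PredictionDate, unit = "month")
  ) %>%
  group_by(Month) %>%
  summarize(MeanAbsoluteError = mean(AbsoluteError), Predictions = n())

If that error figure starts creeping up from month to month, that’s your cue to retrain.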

Depending upon your choice of algorithm, you might be able to update the existing model with this new information in real time.  Models like neural networks and online passive-aggressive algorithms allow for continuous training, and when you’ve created a process which automatically feeds learned data back into your continuously-training model, you now have true machine learning. Other algorithms, however, require you to retrain from scratch.  That’s not a show-stopper by any means, particularly if your underlying trends are fairly stable.
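
As a quick illustration of what continuous training looks like mechanically, here is a toy sketch using Keras in R.  The model and data below are made up stand-ins, not the salary model from earlier in this series; the key point is that calling fit() on an already-trained model updates the existing weights rather than starting over:

library(keras)

# Toy stand-in for an already-trained regression model.
model <- keras_model_sequential() %>%
  layer_dense(units = 8, input_shape = c(3), activation = "relu") %>%
  layer_dense(units = 1, activation = "linear")
model %>% compile(optimizer = "rmsprop", loss = "mse", metrics = c("mae"))

# Toy stand-ins for historical training data and newly observed production data.
old_x <- matrix(runif(300), ncol = 3); old_y <- rowSums(old_x)
new_x <- matrix(runif(30), ncol = 3);  new_y <- rowSums(new_x)

# Initial training pass.
model %>% fit(old_x, old_y, epochs = 20, verbose = 0)

# Later on, as labeled production data comes in, fit() again:  Keras picks up
# from the current weights instead of retraining from scratch.
model %>% fit(new_x, new_y, epochs = 5, verbose = 0)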

Regardless of model selection, efficacy, and whether you get to call what you’ve done machine learning, you will want to confer with your stakeholders and ensure that your model actually fits their needs; as I mentioned before, you can have the world’s best regression, but if the people with the sacks of cash want a recommendation engine, you’re not getting the goods.  But that doesn’t mean you should try to solve all the problems at once; instead, you want to start with a Minimum Viable Product (MVP) and gauge interest:  develop a model which solves the single most pressing need, and from there, make incremental improvements.  This could include relaxing some of the assumptions you made during initial model development, making more accurate predictions, improving the speed of your service, adding new functionality, or even using this as an intermediate engine to derive some other result.

Using our data platform survey results, assuming the key business personnel were fine with the core idea, some of the specific things we could do to improve our product would be:

  • Make the model more accurate.  Our MAE was about $19-20K, and reducing that error makes our model more useful for others.  One way to do this would be to survey more people.  What we have is a nice starting point, but there are too many gaps to go much deeper than a national level.
  • Introduce intra-regional cost of living.  We all know that $100K in Manhattan, NY and $100K in Manhattan, KS are quite different.  We would want to take into account cost of living, assuming we have enough data points to do this.
  • Use this as part of a product helping employers find the market rate for a new data professional, where we’d ask questions about the job location, relative skill levels, etc. and gin up a reasonable market estimate.

There are plenty of other things we could do over time to add value to our model, but I think that’s a nice stopping point.

What’s Old Is New Again

Once we get to this phase, the iterative nature of this process becomes clear.

The Team Data Science Project Lifecycle (Source)

On the micro level, we bounce around within and between steps in the process.  On the macro level, we iterate through this process over and over again as we develop and refine our models.  There’s a definite end game (mostly when the sacks of cash empty), but how long that takes and how many times you cycle through the process will depend upon how accurate and how useful your models are.

In wrapping up this series, if you want to learn more, check out my Links and Further Information on the topic.

Deploying A Model: The Microservice Approach

This is part seven of a series on launching a data science project.

Up to this point, we’ve worked out a model which answers important business questions.  Now our job is to get that model someplace where people can make good use of it.  That’s what today’s post is all about:  deploying a functional model.

Back in the day (by which I mean, say, a decade ago), one team would build a solution using an analytics language like R, SAS, Matlab, or whatever, but you’d almost never take that solution directly to production.  These were analytical Domain-Specific Languages with a set of assumptions that could work well for a single practitioner but wouldn’t scale to a broad solution.  For example, R had historically made use of a single CPU core and was full of memory leaks.  Those didn’t bother analysts too much because desktops tended to be single-core and you could always reboot the machine or restart R.  But that doesn’t work so well for a server—you need something more robust.

So instead of using the analytics DSL directly in production, you’d use it indirectly.  You’d use R (or SAS or whatever) to figure out the right algorithm and determine weights and construction and toss those values over the wall to an implementation team, which would rewrite your model in some other language like C.  The implementation team didn’t need to understand all of the intricacies of the problem, but did need to have enough practical statistics knowledge to understand what the researchers meant and translate their code to fast, efficient C (or C++ or Java or whatever).  In this post, we’ll look at a few changes that have led to a shift in deployment strategy, and then cover what this shift means for practitioners.

Production-Quality Languages

The first shift is the improvement in languages.  There are good libraries for Java, C#, and other “production” languages, so that’s a positive.  But that’s not one of the two positives I want to focus on today.  The first positive is the general improvement in analytical DSLs like R.  Over the past several years, R has gone from being not so great for running a business to being production-quality (although not without its foibles).  Revolution Analytics (now owned by Microsoft) played a nice-sized role in that, focusing on building a stable, production-ready environment with multi-core support.  The same goes for RStudio, another organization which has focused on making R more useful in the enterprise.

The other big positive is the introduction of Python as a key language for data science.  With libraries like NumPy, scikit-learn, and Pandas, you can build quality models.  And with Cython, a data scientist can compile those models down to C to make them much faster.  I think the general acceptance of Python in this space has helped spur on developers around other languages (whether open-source like R or closed-source commercial languages like SAS) to get better.

The Era Of The Microservice

The other big shift is a move away from single, large services which try to solve all of the problems.  Instead, we’ve entered the era of the microservice:  a small service dedicated to providing a single answer to a single problem.  A microservice architecture lets us build smaller applications geared toward solving the domain problem rather than trying to solve the integration problem.  Although you can definitely configure other forms of interoperation, most microservices are exposed via web calls, and that’s the scenario I’ll discuss today.  The biggest benefit to setting up a microservice this way is that I can write my service in R, you can call it from your Python service, and then some .NET service could call yours, and nobody cares about the particular languages used because they all speak over a common, known protocol.

One concern here is that you don’t want to waste your analysts’ time learning how to build web services, and that’s where data science workbenches and deployment tools like DeployR come into play.  These make it easier to deploy scalable predictive services, allowing practitioners to build their R scripts, push them to a service, and let that service host the models and turn function calls into API calls automatically.
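
If you do have a bit of development experience and don’t want a full workbench, one lightweight option (my suggestion here, not a tool mentioned above) is the plumber package, which turns annotated R functions into web endpoints.  Here’s a minimal sketch, assuming we previously saved a model (say, a linear regression) with saveRDS(); the file name and parameter are hypothetical:

# predict_api.R -- a minimal plumber sketch.
library(plumber)

# Assumes a model saved earlier via saveRDS(model, "salary_model.rds").
model <- readRDS("salary_model.rds")

#* Predict a salary for a given number of years of experience.
#* @param years_experience Number of years of experience.
#* @post /predict
function(years_experience) {
  new_data <- data.frame(YearsWithThisDatabase = as.numeric(years_experience))
  as.numeric(predict(model, new_data))
}

Running plumber::plumb("predict_api.R")$run(port = 8000) hosts the endpoint, and any service which can make an HTTP POST can consume it, regardless of language.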

But if you already have application development skills on your team, you can make use of other patterns.  Let me give two examples of patterns that my team has used to solve specific problems.

Machine Learning Services

The first pattern involves using SQL Server Machine Learning Services as the core engine.  We built a C# Web API which calls ML Services, passing in details on what we want to do (e.g., generate predictions for a specific set of inputs given an already-existing model).  A SQL Server stored procedure accepts the inputs and calls ML Services, which farms out the request to a service which understands how to execute R code.  The service returns results, which we interpret as a SQL Server result set, and we can pass that result set back up to C#, creating a return object for our users.

In this case, SQL Server is doing a lot of the heavy lifting, and that works well for a team with significant SQL Server experience.  This also works well if the input data lives on the same SQL Server instance, reducing data transit time.

APIs Everywhere

The second pattern that I’ll cover is a bit more complex.  We start once again with a C# Web API service.  On the opposite end, we’re using Keras in Python to make predictions against trained neural network models.  To link the two together, we have a couple more layers:  first, a Flask API (with Gunicorn as the production-grade application server).  Then, we stand nginx in front of it to handle load balancing.  The C# API makes requests to nginx, which feeds the request to Gunicorn, which runs the Keras code, returning results back up the chain.

So why have the C# service if we’ve already got nginx running?  That way I can cache prediction results (under the assumption that those results aren’t likely to change much given the same inputs) and integrate easily with the C#-heavy codebase in our environment.

Notebooks

If you don’t need to run something as part of an automated system, another deployment option is to use notebooks like Jupyter, Zeppelin, or knitr.  These notebooks tend to work with a variety of languages and offer you the ability to integrate formatted text (often through Markdown), code, and images in the same document.  This makes them great for pedagogical purposes and for reviewing your work six months later, when you’ve forgotten all about it.

Using a Jupyter notebook to review Benford’s Law.

Interactive Visualization Products

Another good way of getting your data into users’ hands is Shiny, an R package for building interactive web applications which can take advantage of JavaScript libraries like D3 to visualize your data.  Again, this is not the type of technology you’d use to integrate with other services, but if you have information that you want to share directly with end users, it’s a great choice.

Conclusion

Over the course of this post, I’ve looked at a few different ways of getting model results and data into the hands of end users, whether via other services (like using the microservice deployment model) or directly (using notebooks or interactive applications).  For most scenarios, I think that we’re beyond the days of needing to have an implementation team rewrite models for production, and whether you’re using R or Python, there are good direct-to-production routes available.

How Much Can We Earn? Implementing A Model

This is part six of a series on launching a data science project.

Last time around, we walked through the idea of what building a model entails.  We built a clean(er) data set and did some analysis earlier, and in this post, I’m going to build on that.

Modeling

Because our question is a “how much?” question, we want to use regression to solve the problem. The most common form of regression that you’ll see in demonstrations is linear regression, because it is easy to teach and easy to understand. In today’s demo, however, we’re going to build a neural network with Keras. Although our demo is in R, Keras actually uses Python on the back end to run TensorFlow. There are other libraries out there which can run neural networks strictly within R (for example, Microsoft Machine Learning’s R implementation has the RxNeuralNet() function), but we will use Keras in this demo because it is a popular library.

Now that we have an algorithm and implementation in mind, let’s split the data out into training and test subsets. I want to use Country as the partition variable because I want to ensure that we retain some data from each country in the test set. To make this split, I am using the createDataPartition() function in caret, and then I’ll separate the rows into training and test data sets.

trainIndex <- caret::createDataPartition(survey_2018$Country, p = 0.7, list = FALSE, times = 1)
train_data <- survey_2018[trainIndex,]
test_data <- survey_2018[-trainIndex,]

We will have 1976 training rows and 841 testing rows.

Once I have this data split, I want to perform some operations on the training data. Specifically, I want to think about the following:

  • One-Hot Encode the categorical data
  • Mean-center the data, so that the mean of each numeric value is 0
  • Scale the data, so that the standard deviation of each value is 1

The bottom two are called normalizing the data. This is a valuable technique when dealing with many algorithms, including neural networks, as it helps with optimizing gradient descent problems.

In order to perform all of these operations, I will create a recipe, using the recipes package.

NOTE: It turns out that normalizing the features results in a slightly worse outcome in this case, so I’m actually going to avoid that. You can uncomment the two sections and run it yourself if you want to try. In some problems, normalization is the right answer; in others, it’s better without normalization.

rec_obj <- recipes::recipe(SalaryUSD ~ ., data = train_data) %>%       # Build out a set of rules we want to follow (a recipe)
  step_dummy(all_nominal(), -all_outcomes()) %>%              # One-hot encode categorical data
  #step_center(all_predictors(), -all_outcomes()) %>%          # Mean-center the data
  #step_scale(all_predictors(), -all_outcomes()) %>%           # Scale the data
  prep(data = train_data)

rec_obj
Data Recipe

Inputs:

      role #variables
   outcome          1
 predictor         17

Training data contained 1976 data points and no missing data.

Operations:

Dummy variables from Country, EmploymentStatus, JobTitle, ... [trained]

Now we can bake our data based on the recipe above. Note that I performed all of these operations only on the training data. If we normalize the training + test data together, our optimization function gets a sneak peek at the distribution of the test data, and that will bias our result.

After building up the x_ series of data sets, I’ll build vectors which contain the salaries for the training and test data. I need to make sure to remove the SalaryUSD variable; we don’t want to make that available to the trainer as an independent variable!

x_train_data <- recipes::bake(rec_obj, newdata = train_data)
x_test_data <- recipes::bake(rec_obj, newdata = test_data)
y_train_vec <- pull(x_train_data, SalaryUSD)
y_test_vec  <- pull(x_test_data, SalaryUSD)
# Remove the SalaryUSD variable.
x_train_data <- x_train_data[,-1]
x_test_data <- x_test_data[,-1]

At this point, I want to build the Keras model. I’m creating a build_model function in case I want to run this over and over. In a real-life scenario, I would perform various optimizations, do cross-validation, etc. In this scenario, however, I am just going to run one time against the full training data set, and then evaluate it against the test data set.

Inside the function, we start by declaring a Keras model. Then, I add layers to the model. The first layer is a dense (fully-connected) layer which accepts the training data as inputs and uses the Rectified Linear Unit (ReLU) activation mechanism. This is a decent first guess for activation mechanisms. We then have a dropout layer, which reduces the risk of overfitting on the training data, followed by a second, larger dense ReLU layer and another dropout layer. Finally, I have a dense layer for my output, which will give me the salary.

I compile the model using the RMSProp optimizer. This is a good default optimizer for neural networks, although you might try Adagrad, Adam, or AdaMax as well. Our loss function is Mean Squared Error, which is a classic loss function for finding the error in a regression. Finally, I’m interested in the Mean Absolute Error–that is, the dollar amount difference between our function’s prediction and the actual salary. The closer to $0 this is, the better.

build_model <- function() {
  model <- keras_model_sequential() %>%
    layer_dense(units = 256, input_shape = c(ncol(x_train_data)), activation = "relu") %>%
    layer_dropout(rate = 0.2) %>%
    layer_dense(units = 512, activation = "relu") %>%
    layer_dropout(rate = 0.2) %>%
    layer_dense(units = 1, activation = "linear") # No activation --> linear layer

  # RMSProp is a nice default optimizer for a neural network.
  # Mean Squared Error is a classic loss function for dealing with regression-style problems, whether with a neural network or otherwise.
  # Mean Average Error gives us a metric which directly translates to the number of dollars we are off with our predictions.
  model %>% compile(
    optimizer = "rmsprop",
    loss = "mse",
    metrics = c("mae")
  )
}

Building out this model can take some time, so be patient.

model <- build_model()
model %>% fit(as.matrix(x_train_data), y_train_vec, epochs = 100, batch_size = 16, verbose = 0)
result <- model %>% evaluate(as.matrix(x_test_data), y_test_vec)
result
$loss
863814393.60761

$mean_absolute_error
19581.9413644471

What this tells us is that, after generating our model, we are an average of mean_absolute_error dollars off from reality. In my case, that was just under $20K. That’s not an awful amount off. In fact, it’s an alright start, though I wouldn’t trust this model as-is for my negotiations. With a few other enhancements, we might see that number drop a bit and start getting into trustworthy territory.

With a real data science project, I would dig further, seeing if there were better algorithms available, cross-validating the training set, etc. As-is, this result isn’t good enough for a production scenario, but we can pretend that it is.

Now let’s test a couple of scenarios. First up, a few snapshots of my own career over time, as well as a hypothetical case where I moved to Canada last year.  There might be some exchange rate shenanigans, but there were quite a few Canadian entrants in the survey, so it should be a pretty fair comp.

test_cases <- test_data[1:4, ]

test_cases$SalaryUSD = c(1,2,3,4)
test_cases$Country = c("United States", "United States", "United States", "Canada")
test_cases$YearsWithThisDatabase = c(0, 5, 11, 11)
test_cases$EmploymentStatus = c("Full time employee", "Full time employee", "Full time employee", "Full time employee")
test_cases$JobTitle = c("Developer: App code (C#, JS, etc)", "DBA (General - splits time evenly between writing & tuning queries AND building & troubleshooting servers)", "Manager", "Manager")
test_cases$ManageStaff = c("No", "No", "Yes", "Yes")
test_cases$YearsWithThisTypeOfJob = c(0, 5, 0, 0)
test_cases$OtherPeopleOnYourTeam = c(5, 0, 2, 2)
test_cases$DatabaseServers = c(8, 12, 150, 150)
test_cases$Education = c("Bachelors (4 years)", "Masters", "Masters", "Masters")
test_cases$EducationIsComputerRelated = c("Yes", "Yes", "Yes", "Yes")
test_cases$Certifications = c("No, I never have", "Yes, and they're currently valid", "Yes, but they expired", "Yes, but they expired")
test_cases$HoursWorkedPerWeek = c(40, 40, 40, 40)
test_cases$TelecommuteDaysPerWeek = c("None, or less than 1 day per week", "None, or less than 1 day per week", "None, or less than 1 day per week", "None, or less than 1 day per week")
test_cases$EmploymentSector = c("State/province government", "State/province government", "Private business", "Private business")
test_cases$LookingForAnotherJob = c("No", "Yes", "No", "No")
test_cases$CareerPlansThisYear = c("Stay with the same employer, same role", "Stay with the same role, but change employers", "Stay with the same employer, same role", "Stay with the same employer, same role")
test_cases$Gender = c("Male", "Male", "Male", "Male")

# Why is this only letting me fit two objects at a time?
x_test_cases_1 <- recipes::bake(rec_obj, newdata = head(test_cases,2))
x_test_cases_2 <- recipes::bake(rec_obj, newdata = tail(test_cases,2))
x_test_cases <- rbind(x_test_cases_1, x_test_cases_2)
x_test_cases <- x_test_cases %>% select(-SalaryUSD)

model %>% predict(as.matrix(x_test_cases))
58330.57
75734.77
109289.84
78821.73

The first prediction was pretty close to right, but the next two were off.  Also compare them to my results from last year.  The Canadian rate is interesting considering the exchange rate for this time was about 75-78 US cents per Canadian dollar, and the Canadian rate is about 72%.

Note that I had a bit of difficulty running the bake function against these data sets.  When I tried to build up more than two rows, I would get a strange off-by-one error in R.  For example, here’s what it looks like when I try to use head(test_cases, 3) instead of 2:

Error in data.frame(..., check.names = FALSE): arguments imply differing number of rows: 3, 2
Traceback:

1. recipes::bake(rec_obj, newdata = head(test_cases, 3))
2. bake.recipe(rec_obj, newdata = head(test_cases, 3))
3. bake(object$steps[[i]], newdata = newdata)
4. bake.step_dummy(object$steps[[i]], newdata = newdata)
5. cbind(newdata, as_tibble(indicators))
6. cbind(deparse.level, ...)
7. data.frame(..., check.names = FALSE)
8. stop(gettextf("arguments imply differing number of rows: %s", 
 .     paste(unique(nrows), collapse = ", ")), domain = NA)

I haven’t figured out the answer to that yet, but we’ll hand-wave that problem away for now and keep going with our analysis.

Next, what happens if we change me from Male to Female in these examples?

test_cases$Gender = c("Female", "Female", "Female", "Female")
# Why is this only letting me fit two objects at a time?
x_test_cases_1 <- recipes::bake(rec_obj, newdata = head(test_cases,2))
x_test_cases_2 <- recipes::bake(rec_obj, newdata = tail(test_cases,2))
x_test_cases <- rbind(x_test_cases_1, x_test_cases_2)
x_test_cases <- x_test_cases %>% select(-SalaryUSD)

model %>% predict(as.matrix(x_test_cases))
52563.52
69958.53
103513.19
73491.90

In my scenario, there is a $5,776.65 difference between male and female salaries. There is no causal explanation here (nor will I venture one in this post), but we can see that men earn more than women based on data in this survey.

Conclusion

In today’s post, we used Keras to build up a decent first attempt at a model for predicting data professional salaries.  In reality, there’s a lot more to do before this is ready to roll out, but we’ll leave the subject here and move on to the next topic, so stay tuned.

The Basics Of Data Modeling

This is part five of a series on launching a data science project.

At this point, we have done some analysis and cleanup on a data set.  It might not be perfect, but it’s time for us to move on to the next step in the data science process:  modeling.

Modeling has five major steps, and we’ll look at each step in turn.  Remember that, like the rest of the process, I may talk about “steps” but these are iterative and you’ll bounce back and forth between them.

Feature Engineering

Feature engineering involves creating relevant features from raw data.  A few examples of feature engineering (with a short dplyr sketch following the list) include:

  • Creating indicator flags, such as IsMinimumAge: Age >= 21, or IsManager: NumberOfEmployeesManaged > 0.  These are designed to help you slice observations and simplify model logic, particularly if you’re building something like a decision tree.
  • Calculations, such as ClickThroughRate = Clicks / Impressions.  Note that this definition doesn’t imply multicollinearity, though, as ClickThroughRate isn’t linearly related to either Clicks or Impressions.
  • Geocoding latitude and longitude from a street address.
  • Aggregating data.  That could be aggregation by day, by week, by hour, by 36-hour period, whatever.
  • Text processing:  turning words into arbitrary numbers for numeric analysis.  Common techniques for this include TF-IDF and word2vec.
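
Here’s the short dplyr sketch I mentioned above.  The data frame and its columns are made up, but the pattern is the same regardless of domain:  derive flags and ratios from the raw columns with mutate().

library(dplyr)

# Hypothetical raw data:  a few people and their ad interactions.
raw_data <- tibble::tibble(
  Age = c(19, 34, 27),
  NumberOfEmployeesManaged = c(0, 4, 0),
  Clicks = c(12, 45, 3),
  Impressions = c(1000, 1500, 800)
)

engineered <- raw_data %>%
  mutate(
    IsMinimumAge = Age >= 21,                 # indicator flag
    IsManager = NumberOfEmployeesManaged > 0, # indicator flag
    ClickThroughRate = Clicks / Impressions   # calculated feature
  )
engineered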

Feature Selection

Once we’ve engineered interesting features, we want to use feature selection to winnow down the available set, removing redundant, unnecessary, or highly correlated features.  There are a few reasons that we want to perform feature selection (a quick correlation-based sketch follows the list):

  1. If one explanatory variable can predict another, we have multicollinearity, which can make it harder to give credit to the appropriate variable.
  2. Feature selection makes it easier for a human to understand the model by removing irrelevant or redundant features.
  3. We can perform more efficient training with fewer variables.
  4. We reduce the risk of an irrelevant or redundant feature causing spurious correlation.
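
Here’s the quick correlation-based sketch I promised above.  The data is made up for illustration; caret’s findCorrelation() flags columns whose pairwise correlation exceeds a cutoff:

library(caret)

# Made-up numeric features, where x2 is nearly a copy of x1.
set.seed(106)
x1 <- rnorm(100)
features <- data.frame(
  x1 = x1,
  x2 = x1 + rnorm(100, sd = 0.01),  # almost perfectly collinear with x1
  x3 = rnorm(100)
)

correlations <- cor(features)
# Columns recommended for removal at a 0.9 correlation cutoff.
caret::findCorrelation(correlations, cutoff = 0.9, names = TRUE)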

For my favorite example of spurious correlation:

The only question here is, which causes which?

Model Training

Now that we have some data and a clue of what we’re going to feed into an algorithm, it’s time to step up our training regimen.  First up, we’re going to take some percentage of our total data and designate it for training and validation, leaving the remainder for evaluation (aka, test).  There are no hard rules on percentages, but a typical reserve rate is about 70-80% for training/validation and 20-30% for test.  We ideally want to select the data randomly but also include the relevant spreads and distributions of observations by pertinent variables in our training set; fortunately, there are tools available which can help us do just this, and we’ll look at them in a bit.

First up, though, I want to cover the four major branches of algorithms.

Supervised Learning

The vast majority of problems are supervised learning problems.  The idea behind a supervised learning problem is that we have some set of known answers (labels).  We then train a model to map input data to those labels in order to have the model predict the correct answer for unlabeled records.

Going back to the first post in this series, I pointed out that you have to listen to the questions people ask.  Here’s where that pays off:  the type of algorithm we want to choose depends in part on the nature of those questions.  Major supervised learning classes and their pertinent driving questions include:

  • Regression — How many / how much?
  • Classification — Which?
  • Recommendation — What next?

For example, in our salary survey, we have about 3000 labeled records:  3000(ish) cases where we know the salary in USD based on what people have reported.  My goal is to train a model which can then take some new person’s inputs and spit out a reasonable salary prediction.  Because my question is “How much money should we expect a data professional will make?” we will solve this using regression techniques.

Unsupervised Learning

With unsupervised learning, we do not know the answers beforehand, so we’re trying to derive answers within the data.  Typically, we’ll use unsupervised learning to gain more insight about the data set, which can hopefully give us some labels we can use to convert this into a relevant supervised learning problem.  The top forms of unsupervised learning include:

  • Clustering — How can we segment?
  • Dimensionality reduction — What of this data is useful?

Typically your business users won’t know or care about dimensionality reduction (that is, techniques like Principal Component Analysis), but we as analysts can use dimensionality reduction to zero in on the most useful features.
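
As a tiny illustration of dimensionality reduction, here’s a sketch using base R’s prcomp() on made-up data (not the survey itself), where most of the variance collapses into a couple of components:

# Made-up numeric data in which two of the three columns move together.
set.seed(106)
base_vals <- rnorm(100)
numeric_data <- data.frame(
  a = base_vals,
  b = base_vals * 2 + rnorm(100, sd = 0.1),
  c = rnorm(100)
)

# Principal Component Analysis:  summary() shows how much variance each component explains.
pca <- prcomp(numeric_data, center = TRUE, scale. = TRUE)
summary(pca)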

Self-Supervised Learning

Wait, isn’t self-supervised learning just a subset of supervised learning?  Sure, but it’s pretty useful to look at on its own.  Here, we use heuristics to guesstimate labels and train the model based on those guesstimates.  For example, let’s say that we want to train a neural network or Markov chain generator to read the works of Shakespeare and generate beautiful prose for us.  The way the recursive model would work is to take what words have already been written and then predict the most likely next word or punctuation character.

We don’t have “labeled” data within the works of Shakespeare, though; instead, our training data’s “label” is the next word in the play or sonnet.  So we train our model based on the chains of words, treating the problem as interdependent rather than a bunch of independent words just hanging around.
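
A toy sketch of how those heuristic labels fall out of the text itself:  each word’s “label” is simply the word which follows it.

# Build (current word, next word) training pairs from a snippet of text.
text <- "to be or not to be that is the question"
words <- strsplit(text, " ")[[1]]

training_pairs <- data.frame(
  current_word = head(words, -1),
  next_word    = tail(words, -1)
)
training_pairs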

Reinforcement Learning

Reinforcement learning is where we train an agent to observe its environment and use those environmental clues to make a decision.  For example, there’s a really cool video from SethBling about MariFlow:

The idea, if you don’t want to watch the video, is that he trained a recurrent neural network based on hours and hours of his Mario Kart gameplay.  The neural network has no clue what a Mario Kart is, but the screen elements below show how it represents the field of play and state of the game, and uses those inputs to determine which action to take next.

“No, mom, I’m playing this game strictly for research purposes!”

Choose An Algorithm

Once you understand the nature of the problem, you can choose the form of your destructor… er, algorithm.  There are often several potential algorithms which can solve your problem, so you will want to try different algorithms and compare.  There are a few major trade-offs between algorithms, so each one will have some better or worse combination of the following features:

  • Accuracy and susceptibility to overfitting
  • Training time
  • Ability for a human to be able to understand the result
  • Number of hyperparameters
  • Number of features allowed.  For example, a model like ARIMA doesn’t give you many features—it’s just the label behavior over time.

Microsoft has a nice algorithm cheat sheet that I recommend checking out:

It is, of course, not comprehensive, but it does set you in the right direction.  For example, we already know that we want to predict values, and so we’re going into the Regression box in the bottom-left.  From there, we can see some of the trade-offs between different algorithms.  If we use linear regression, we get fast training, but the downside is that if our dependent variable is not a linear function of the independent variables, then we won’t end up with a good result.

By contrast, a neural network regression tends to be fairly accurate, but can take a long time to finish or require expensive hardware to finish in any reasonable time.

Once you have an algorithm, features, and labels (if this is a supervised learning problem), you can train the model.  Training a model involves solving a system of equations, minimizing a loss function.  For example, here is a plot with a linear regression thrown in:

This plot might look familiar if you’ve read my ggplot2 series.

In this chart, I have a straight line which represents the best fitting line for our data points, where best fit is defined as the line which minimizes the sum of the squares of errors (i.e., the sum of the square of the distance between the dot and our line).  Computers are great at this kind of math, so as long as we set up the problem the right way and tell the computer what we want it to do, it can give us back an answer.
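
Here’s a minimal sketch of that idea with lm() and made-up data; the fitted line is the one which minimizes the sum of squared errors, and geom_smooth() draws that same line over the points:

library(ggplot2)

# Made-up data with a roughly linear relationship plus noise.
set.seed(106)
df <- data.frame(x = 1:50)
df$y <- 3 * df$x + rnorm(50, sd = 10)

# lm() solves for the intercept and slope which minimize the sum of squared errors.
fit <- lm(y ~ x, data = df)
coef(fit)

# The best-fit line drawn over the points.
ggplot(df, aes(x = x, y = y)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE) +
  theme_minimal()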

But we’ve got to make sure it’s a good answer.  That’s where the next section helps.

Validate The Model

Instead of using up all of our data for training, we typically want to perform some level of validation within our training data set to ensure that we are on the right track and are not overfitting our model.  Overfitting happens when a model latches onto the particulars of a data set, leaving it at risk of not being able to generalize to new data.  The easiest way to tell if you are overfitting is to test your model against unseen data.  If there is a big dropoff in model accuracy between the training and testing phases, you are likely overfitting.

Here’s one technique for validation:  let’s say that we reserved 70% of our data for training.  Of the 70%, we might want to slice off 10% for validation, leaving 60% for actual training.  We feed the 60% of the data to our algorithm, generating a model.  Then we predict the outcomes for our validation data set and see how close we were to reality, and how far off the accuracy rates are for our validation set versus our training set.

Another technique is called cross-validation.  Cross-validation is a technique where we slice and dice the training data, training our model with different subsets of the total data.  The purpose here is to find a model which is fairly robust to the particulars of a subset of training data, thereby reducing the risk of overfitting.  Let’s say that we cross-validate with 4 slices.  In the first step, we train with the first 3/4 of the data, and then validate with the final 1/4.  In the second step, we train with slices 1, 2, and 4 and validate against slice 3.  In the third step, we train with 1, 3, and 4 and validate against slice 2.  Finally, we train with 2, 3, and 4 and validate against slice 1.  We’re looking to build up a model which is good at dealing with each of these scenarios, not just a model which is great at one of the four but terrible at the other three.
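
Here’s a minimal sketch of 4-fold cross-validation using caret, with a plain linear model and made-up data standing in for whatever we would really train:

library(caret)

# Made-up regression data.
set.seed(106)
df <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
df$y <- 5 * df$x1 - 2 * df$x2 + rnorm(200)

# Train with 4-fold cross-validation; caret handles slicing the data into folds.
cv_control <- trainControl(method = "cv", number = 4)
cv_model <- train(y ~ ., data = df, method = "lm", trControl = cv_control)
cv_model$results   # cross-validated error metrics (e.g., RMSE) averaged across the four folds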

Oftentimes, we won’t get everything perfect on the first try.  That’s when we move on to the next step.

Tune The Model

Most models have hyperparameters.  For example, a neural network has a few hyperparameters, including the number of training epochs, the number of layers, the density of each layer, and dropout rates.  For another example, random forests have hyperparameters like the maximum size of each decision tree and the total number of decision trees in the forest.

We tune our model’s hyperparameters using the validation data set.  With cross-validation, we’re hoping that our tuning will not accidentally lead us down the road to spurious correlation, but we have something a bit better than hope:  we have secret data.
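
As a sketch of what simple hyperparameter tuning could look like for a Keras model like the one we will build in the next post, we could loop over candidate dropout rates and let fit()’s validation_split hold out a slice of the training data for scoring.  The build_model(dropout_rate) helper here is hypothetical (a parameterized variant of a model-building function), as are x_train_data and y_train_vec:

library(keras)

# Hypothetical:  build_model(dropout_rate) is a parameterized version of a
# model-building function; x_train_data and y_train_vec are the training inputs.
dropout_rates <- c(0.1, 0.2, 0.3)

validation_mae <- sapply(dropout_rates, function(rate) {
  model <- build_model(dropout_rate = rate)
  history <- model %>% fit(
    as.matrix(x_train_data), y_train_vec,
    epochs = 50, batch_size = 16,
    validation_split = 0.2, verbose = 0
  )
  # Last recorded validation MAE for this candidate (the metric name may be
  # val_mae rather than val_mean_absolute_error, depending on the Keras version).
  tail(history$metrics$val_mean_absolute_error, 1)
})

dropout_rates[which.min(validation_mae)]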

Evaluate The Model

Model evaluation happens when we send new, never before seen data to the model.  Remember that 20-30% that we reserved early on?  This is where we use it.

Now, we want to be careful and make sure not to let any information leak into the training data.  That means that we want to split this data out before normalizing or aggregating the training data set, and then we want to apply those same rules to the test data set.  Otherwise, if we normalize the full data set and then split into training and test, a smart model can surreptitiously learn things about the test data set’s distribution and could train toward that, leading to overfitting our model to the test data and leaving it less suited for the real world.
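
A small sketch of that rule with a single numeric column:  compute the centering and scaling parameters on the training data alone, and then apply those same parameters to the test data.

# Made-up train/test split of a single numeric feature.
set.seed(106)
train_hours <- runif(70, 30, 60)
test_hours  <- runif(30, 30, 60)

# Compute normalization parameters from the training data only...
train_mean <- mean(train_hours)
train_sd   <- sd(train_hours)

# ...and apply those same parameters to both sets.  The test data never
# influences the mean or standard deviation we normalize with.
train_scaled <- (train_hours - train_mean) / train_sd
test_scaled  <- (test_hours - train_mean) / train_sd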

Another option, particularly useful for unlabeled or self-learning examples, is to build a fitness function to evaluate the model.  Genetic algorithms (for a refresher, check out my series) are a common tool for this.  For example, MarI/O uses a genetic algorithm to train a neural network how to play Super Mario World.

He’s no Evander Holyfield, but Mario’s worth a genetic algorithm too.

Conclusion

Just like with data processing, I’m going to split this into two parts.  Today, we’ve looked at some of the theory behind modeling.  Next time around, we’re going to implement a regression model to try to predict salaries.

Data Processing: An Example

This is part four of a series on launching a data science project.

An Example Of Data Processing

Last time around, I spent a lot of time talking about data acquisition, data cleansing, and basic data analysis.  Today, we’re going to walk through a little bit of it with the data professional salary survey.

First, let’s install some packages:

if(!require(tidyverse)) {
  install.packages("tidyverse", repos = "http://cran.us.r-project.org")
  library(tidyverse)
}

if(!require(XLConnect)) {
  install.packages("XLConnect", repos = "http://cran.us.r-project.org")
  library(XLConnect)
}

if(!require(caret)) {
  install.packages("caret", repos = "http://cran.us.r-project.org")
  library(caret)
}

if(!require(recipes)) {
  install.packages("recipes", repos = "http://cran.us.r-project.org")
  library(recipes)
}

if(!require(data.table)) {
  install.packages("data.table", repos = "http://cran.us.r-project.org")
  library(data.table)
}

if(!require(devtools)) {
  install.packages("devtools", repos = "http://cran.us.r-project.org")
  library(devtools)
}

if(!require(keras)) {
  devtools::install_github("rstudio/keras")
  library(keras)
  install_keras(method = "auto", conda = "auto", tensorflow = "default", extra_packages = NULL)
}

The tidyverse package is a series of incredibly useful libraries in R, and I can’t think of doing a data science project in R without it. The XLConnect package lets me read an Excel workbook easily and grab the salary data without much hassle. The caret library provides some helpful tooling for working with data, including splitting out test versus training data, like we’ll do below. The recipes package will be useful for normalizing data later, and we will use data.table to get a glimpse at some of our uneven data. We need the devtools package to install keras from GitHub. Keras is a deep learning library which provides a high-level interface over several neural network back ends, including TensorFlow, which we will use later in this series. We also need to install TensorFlow on our machine. Because this is a small data set, and because I want this to run on machines without powerful GPUs, I am using the CPU-based version of TensorFlow. Performance should still be adequate for our purposes.

Once we have the required packages loaded, we will then load the Excel workbook. I have verified the Excel worksheet and data region are correct, so we can grab the survey from the current directory and load it into salary_data.

wb <- XLConnect::loadWorkbook("2018_Data_Professional_Salary_Survey_Responses.xlsx")
salary_data <- XLConnect::readWorksheet(wb, sheet = "Salary Survey", region = "A4:Z6015")

We can use the glimpse function inside the tidyverse to get a quick idea of what our salary_data dataframe looks like. In total, we have 6011 observations of 26 variables, but this covers two survey years: 2017 and 2018. Looking at the variable names, we can see that there are some which don’t matter very much (like Timestamp, which is when the user filled out the form, and Counter, which is just a 1 for each record).

glimpse(salary_data)
Observations: 6,011
Variables: 26
$ Survey.Year                <dbl> 2017, 2017, 2017, 2017, 2017, 2017, 2017...
$ Timestamp                  <dttm> 2017-01-05 05:10:20, 2017-01-05 05:26:2...
$ SalaryUSD                  <chr> "200000", "61515", "95000", "56000", "35...
$ Country                    <chr> "United States", "United Kingdom", "Germ...
$ PostalCode                 <chr> "Not Asked", "Not Asked", "Not Asked", "...
$ PrimaryDatabase            <chr> "Microsoft SQL Server", "Microsoft SQL S...
$ YearsWithThisDatabase      <dbl> 10, 15, 5, 6, 10, 15, 16, 4, 3, 8, 4, 22...
$ OtherDatabases             <chr> "MySQL/MariaDB", "Oracle, PostgreSQL", "...
$ EmploymentStatus           <chr> "Full time employee", "Full time employe...
$ JobTitle                   <chr> "DBA", "DBA", "Other", "DBA", "DBA", "DB...
$ ManageStaff                <chr> "No", "No", "Yes", "No", "No", "No", "No...
$ YearsWithThisTypeOfJob     <dbl> 5, 3, 25, 2, 10, 15, 11, 1, 2, 10, 4, 8,...
$ OtherPeopleOnYourTeam      <chr> "2", "1", "2", "None", "None", "None", "...
$ DatabaseServers            <dbl> 350, 40, 100, 500, 30, 101, 20, 25, 3, 5...
$ Education                  <chr> "Masters", "None (no degree completed)",...
$ EducationIsComputerRelated <chr> "No", "N/A", "Yes", "No", "Yes", "No", "...
$ Certifications             <chr> "Yes, and they're currently valid", "No,...
$ HoursWorkedPerWeek         <dbl> 45, 35, 45, 40, 40, 35, 40, 36, 40, 45, ...
$ TelecommuteDaysPerWeek     <chr> "1", "2", "None, or less than 1 day per ...
$ EmploymentSector           <chr> "Private business", "Private business", ...
$ LookingForAnotherJob       <chr> "Yes, but only passively (just curious)"...
$ CareerPlansThisYear        <chr> "Not Asked", "Not Asked", "Not Asked", "...
$ Gender                     <chr> "Not Asked", "Not Asked", "Not Asked", "...
$ OtherJobDuties             <chr> "Not Asked", "Not Asked", "Not Asked", "...
$ KindsOfTasksPerformed      <chr> "Not Asked", "Not Asked", "Not Asked", "...
$ Counter                    <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...

Our first data cleansing activity will be to filter our data to include just 2018 results, which gives us a sample size of 3,113 participants. There are also results for 2017, but they asked a different set of questions and we don’t want to complicate the analysis or strip out the new 2018 questions.

survey_2018 <- filter(salary_data, Survey.Year == 2018)
nrow(survey_2018) # << 3113 records returned

Looking at the survey, there are some interesting data points that we want:

  • SalaryUSD (our label, that is, what we are going to try to predict)
  • Country
  • YearsWithThisDatabase
  • EmploymentStatus
  • JobTitle
  • ManageStaff
  • YearsWithThisTypeOfJob
  • OtherPeopleOnYourTeam
  • DatabaseServers
  • Education
  • EducationIsComputerRelated
  • Certifications
  • HoursWorkedPerWeek
  • TelecommuteDaysPerWeek
  • EmploymentSector
  • LookingForAnotherJob
  • CareerPlansThisYear
  • Gender

For each of these variables, we want to see the range of options and perform any necessary cleanup. The first thing I’d look at is the cardinality of each variable, followed by a more detailed analysis of the smaller ones.

PrimaryDatabase is another variable which looks interesting, but it skews so heavily toward SQL Server that there’s more noise than signal to it. Because there are so many platforms with 10 or fewer entries and about 92% of entrants selected SQL Server, we’ll throw it out.

rapply(survey_2018, function(x) { length(unique(x)) })

Survey.Year – 1
Timestamp – 3112
SalaryUSD – 865
Country – 73
PostalCode – 1947
[… continue for a while]

unique(survey_2018$Country)
  1. ‘United States’
  2. ‘Australia’
  3. ‘Spain’
  4. ‘United Kingdom’
    [… continue for a while]

 

unique(survey_2018$EmploymentStatus)
  1. ‘Full time employee’
  2. ‘Full time employee of a consulting/contracting company’
  3. ‘Independent consultant, contractor, freelancer, or company owner’
  4. ‘Part time’

We can use the setDT function from data.table to see just how many records we have for each level of a particular factor. For example, we can see the different entries for PrimaryDatabase and EmploymentSector below. Both of these are troublesome for our modeling because they have a number of levels with only 1-2 entries. This makes it likely that we will fail to collect a relevant record in our training data set, and that will mess up our model later. To rectify this, I am going to remove PrimaryDatabase as a feature and remove the two students from our sample.

data.table::setDT(survey_2018)[, .N, keyby=PrimaryDatabase]

To the three MongoDB users: you have my sympathy.

data.table::setDT(survey_2018)[, .N, keyby=EmploymentSector]

In a way, aren’t we all students? No. Only two of us are.

Most of these columns came from dropdown lists, so they’re already fairly clean. But there are some exceptions to the rule. They are:

  • SalaryUSD
  • YearsWithThisDatabase
  • YearsWithThisTypeOfJob
  • DatabaseServers
  • HoursWorkedPerWeek
  • Gender

All of these were text fields, and whenever a user gets to enter text, you can assume that something will go wrong. For example:

survey_2018 %>%
  distinct(YearsWithThisDatabase) %>%
  arrange(desc(YearsWithThisDatabase)) %>%
  slice(1:10)

Some are older than they seem.

Someone with 53,716 years working with their primary database of choice? That’s commitment! You can also see a couple of people who clearly put in the year they started rather than the number of years working with it, and someone who maybe meant 10 years? But who knows, people type in weird stuff.

Anyhow, let’s see how much that person with at least 10 thousand years of experience makes:

survey_2018 %>%
  filter(YearsWithThisDatabase > 10000)

Experience doesn’t pay after the first century or two.

That’s pretty sad, considering their millennia of work experience. $95-98K isn’t even that great a number.

Looking at years of experience with their current job roles, people tend to be more reasonable:

survey_2018 %>%
  distinct(YearsWithThisTypeOfJob) %>%
  arrange(desc(YearsWithThisTypeOfJob)) %>%
  slice(1:10)

Next up, we want to look at the number of database servers owned. 500,000+ database servers is a bit excessive. Frankly, I’m suspicious about any numbers greater than 5000, but because I can’t prove it otherwise, I’ll leave them be.

survey_2018 %>%
  distinct(DatabaseServers) %>%
  arrange(desc(DatabaseServers)) %>%
  slice(1:5)

survey_2018 %>%
  filter(DatabaseServers >= 5000) %>%
  arrange(desc(DatabaseServers))

500K servers is a lot of servers.

The first entry looks like bogus data: a $650K salary, a matching postal code, and 500K database servers, primarily in RDS? Nope, I don’t buy it.

The rest don’t really look out of place, except that I think they put in the number of databases and not servers. For these entrants, I’ll change the number of servers to the median to avoid distorting things.

Now let’s look at hours per week:

survey_2018 %>%
  distinct(HoursWorkedPerWeek) %>%
  arrange(desc(HoursWorkedPerWeek)) %>%
  slice(1:10)

One of these numbers is not like the others.  The rest of them are just bad.

To the person who works 200 hours per week: find a new job. Your ability to pack more than 7*24 hours of work into 7 days is too good to waste on a job making just $120K per year.

survey_2018 %>%
  filter(HoursWorkedPerWeek >= 168) %>%
  arrange(desc(HoursWorkedPerWeek))

What would I do with an extra day and a half per week? Sleep approximately an extra day and a half per week.

As far as Gender goes, there are only three with enough records to be significant: Male, Female, and Prefer not to say. We’ll take Male and Female and bundle the rest under “Other” to get a small but not entirely insignificant set there.

survey_2018 %>%
  group_by(Gender) %>%
  summarize(n = n())

To the one Reptilian in the survey, I see you and I will join forces with Rowdy Roddy Piper to prevent you from taking over our government.

survey_2018 %>%
  group_by(Country) %>%
  summarize(n = n()) %>%
  filter(n >= 20)

Probably the most surprising country on this list is The Netherlands.  India is a close second, but for the opposite reason.

There are only fifteen countries with at least 20 data points and just eight with at least 30. This means that we won’t get a great amount of information from cross-country comparisons outside of the sample. Frankly, I might want to limit this to just the US, UK, Canada, and Australia, as the rest are marginal, but for this survey analysis, I’ll keep the other eleven.

Building Our Cleaned-Up Data Set

Now that we’ve performed some basic analysis, we will clean up the data set. I’m doing most of the cleanup in a single operation, but I do have some comment notes here, particularly around the oddities with SalaryUSD. The SalaryUSD column has a few problems:

  • Some people put in pennies, which aren’t really that important at the level we’re discussing. I want to strip them out.
  • Some people put in delimiters like commas or decimal points (a decimal point acts as a thousands separator in countries like Germany). I want to strip them out, particularly because the decimal point might interfere with my analysis, turning 100.000 into $100 instead of $100K.
  • Some people included the dollar sign, so remove that, as well as any spaces.

It’s not a perfect regex, but it did seem to fix the problems in this data set at least.

valid_countries <- survey_2018 %>%
                    group_by(Country) %>%
                    summarize(n = n()) %>%
                    filter(n >= 20)

# Data cleanup
survey_2018 <- salary_data %>%
  filter(Survey.Year == 2018) %>%
  filter(HoursWorkedPerWeek < 200) %>%
  # There were only two students in the survey, so we will exclude them here.
  filter(EmploymentSector != "Student") %>%
  inner_join(valid_countries, by="Country") %>%
  mutate(
    SalaryUSD = stringr::str_replace_all(SalaryUSD, "\\$", "") %>%
      stringr::str_replace_all(., ",", "") %>%
      stringr::str_replace_all(., " ", "") %>%
      # Some people put in pennies.  Let's remove anything with a decimal point and then two numbers.
      stringr::str_replace_all(., stringr::regex("\\.[0-9]{2}$"), "") %>%
      # Now any decimal points remaining are formatting characters.
      stringr::str_replace_all(., "\\.", "") %>%
      as.numeric(.),
    # Some people have entered bad values here, so set them to the median.
    YearsWithThisDatabase = case_when(
      (YearsWithThisDatabase > 32) ~ median(YearsWithThisDatabase),
      TRUE ~ YearsWithThisDatabase
    ),
    # Some people apparently entered number of databases rather than number of servers.
    DatabaseServers = case_when(
      (DatabaseServers >= 5000) ~ median(DatabaseServers),
      TRUE ~ DatabaseServers
    ),
    EmploymentStatus = as.factor(EmploymentStatus),
    JobTitle = as.factor(JobTitle),
    ManageStaff = as.factor(ManageStaff),
    OtherPeopleOnYourTeam = as.factor(OtherPeopleOnYourTeam),
    Education = as.factor(Education),
    EducationIsComputerRelated = as.factor(EducationIsComputerRelated),
    Certifications = as.factor(Certifications),
    TelecommuteDaysPerWeek = as.factor(TelecommuteDaysPerWeek),
    EmploymentSector = as.factor(EmploymentSector),
    LookingForAnotherJob = as.factor(LookingForAnotherJob),
    CareerPlansThisYear = as.factor(CareerPlansThisYear),
    Gender = as.factor(case_when(
      (Gender == "Male") ~ "Male",
      (Gender == "Female") ~ "Female",
      TRUE ~ "Other"
    ))
  ) 

Now we can pare out variables we don’t need. Some of these, like postal code, are interesting but we just don’t have enough data for it to make sense. Others, like Kinds of Tasks Performed or Other Job Duties, have too many varieties for us to make much sense with a first pass. They might be interesting in a subsequent analysis, though.

survey_2018 <- survey_2018 %>%
  # One person had a salary of zero.  That's just not right.
  filter(SalaryUSD > 0) %>%
  select(-Counter, -KindsOfTasksPerformed, -OtherJobDuties, -OtherDatabases, -Timestamp, -Survey.Year, 
         -PostalCode, -n, -PrimaryDatabase)

Now that we have our salary data fixed, we can finally look at outliers. I’d consider a salary of $500K a year to be a bit weird for this field. It’s not impossible, but I am a little suspicious. I am very suspicious of the part-timer making $1.375 million, the federal employee making $1 million, or the New Zealander making $630K at a non-profit.

I’m kind of taking a risk by removing these, but they’re big enough outliers that they can have a real impact on our analysis if they’re bad data.

survey_2018 %>%
  filter(SalaryUSD > 500000) %>%
  arrange(desc(SalaryUSD))

I think I’d be willing to accept $1.4 million a year to be a manager of none.

On the other side, there are 12 people who say they earned less than $5K a year. Those also seem wrong. Some of them look like dollars per hour, and maybe some are monthly salary. I’m going to strip those out.

survey_2018 %>%
  filter(SalaryUSD < 5000) %>%
  arrange(desc(SalaryUSD))

For just over a dollar a week, you can hire a data architect.

survey_2018 <- filter(survey_2018, SalaryUSD >= 5000 & SalaryUSD <= 500000)

Data Analysis

We did some of the data analysis up above. We can do additional visualization and correlation studies. For example, let’s look at a quick distribution of salaries after our cleanup work:

summary(survey_2018$SalaryUSD)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
   5000   70000   92000   95186  115000  486000

We can also build a histogram pretty easily using the ggplot2 library. This shows the big clump of database professionals earning between $70K and $115K per year. This salary distribution does skew right a bit, as you can see.

ggplot(data = survey_2018, mapping = aes(x = SalaryUSD)) +
  geom_histogram() +
  theme_minimal() +
  scale_x_log10(label = scales::dollar)

Not including that guy making $58 a year.

We can also break this down to look by primary job title, though I’ll limit to a couple of summaries instead of showing a full picture.

survey_2018 %>% filter(JobTitle == "Data Scientist") %>% select(SalaryUSD) %>% summary(.)
   SalaryUSD     
 Min.   : 45000  
 1st Qu.: 76250  
 Median :111000  
 Mean   :102000  
 3rd Qu.:122000  
 Max.   :160000
survey_2018 %>% filter(JobTitle == "Developer: App code (C#, JS, etc)") %>% select(SalaryUSD) %>% summary(.)
   SalaryUSD     
 Min.   : 22000  
 1st Qu.: 60000  
 Median : 84000  
 Mean   : 84341  
 3rd Qu.:105000  
 Max.   :194000
survey_2018 %>% filter(JobTitle == "Developer: T-SQL") %>% select(SalaryUSD) %>% summary(.)
   SalaryUSD     
 Min.   : 12000  
 1st Qu.: 66000  
 Median : 87000  
 Mean   : 88026  
 3rd Qu.:110000  
 Max.   :300000

These results fit pretty well with my biases, although the max Data Scientist salary seems rather low.

Conclusions

This is only a tiny sample of what I’d want to do with a real data set, but it gives you an idea of the kinds of things we look at and the kinds of things we need to fix before a data set becomes useful.

In the next post, we will get started with the wide world of modeling.