Machine Learning with .NET: Predictions

This is part four in a series on Machine Learning with .NET.

Thanks to some travels, part 4 in the series is a bit later than I originally anticipated. In the prior two posts, we looked at different ways to create, train, and test models. In this post, we’ll use a trained model and generate predictions from it. We’ll go back to the model from part two.

Predictions in a Jar

Our first step will be to create a test project. This way, we can try out our code without the commitment of a real project. Inside that test project, I’m going to include NUnit bits and prep a setup method. I think I also decided to figure out how many functional programming patterns I could violate in a single file, but that was an ancillary goal. Here is the code file; we’ll work through it block by block.

Setting Things Up

First, we have our setup block.

public class Tests
{
	MLContext mlContext;
	BillsModelTrainer bmt;
	IEstimator<ITransformer> trainer;
	ITransformer model;
	PredictionEngineBase<RawInput, Prediction> predictor;

	[SetUp]
	public void Setup()
	{
		mlContext = new MLContext(seed: 9997);
		bmt = new BillsModelTrainer();

		var data = bmt.GetRawData(mlContext, "Resources\\2018Bills.csv");
		var split = mlContext.Data.TrainTestSplit(data, testFraction: 0.25);

		trainer = mlContext.MulticlassClassification.Trainers.NaiveBayes(labelColumnName: "Label", featureColumnName: "Features");
		model = bmt.TrainModel(mlContext, split.TrainSet, trainer);

		predictor = mlContext.Model.CreatePredictionEngine<RawInput, Prediction>(model);
	}

In here, I have mutable objects for the context, the Bills model trainer class, the output trainer, the output model, and a prediction engine which takes data of type RawInput and generates predictions of type Prediction. This means that any data we feed into the model for prediction must have the same shape as our raw data.

The Setup() method will run before each test. In it, we build a new ML context with a fixed seed of 9997; that way, every time I run this, I’ll get the same outcomes. We also new up our Bills model trainer and grab raw data from the Resources folder. I’m going to use the same 2018 game data that we looked at in part 2 of the series, as that’s all of the data I have. We’ll pull out 25% of the games for testing and retain the other 75% for training. For the purposes of this demo, I’m using the Naive Bayes classifier for my trainer, so TrainModel() takes but a short time, returning me a model.
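As a reminder, GetRawData() and TrainModel() live in the BillsModelTrainer class from part two, so I won’t reprint them here. As a rough sketch of the idea, though, the training pipeline has to turn the raw columns into the “Label” and “Features” columns the trainer expects, and map the predicted key back to a readable “PredictedOutcome” string. Something along these lines, where the specific transforms and column choices are my assumptions rather than the actual part-two code:

// A rough sketch of what a TrainModel() implementation could look like; the actual
// version is in part two, and the transforms below are illustrative assumptions.
public ITransformer TrainModel(MLContext mlContext, IDataView trainingData, IEstimator<ITransformer> trainer)
{
	var pipeline = mlContext.Transforms.Conversion.MapValueToKey("Label", "Outcome")
		// One-hot encode the categorical inputs so the trainer can consume them.
		.Append(mlContext.Transforms.Categorical.OneHotEncoding("QuarterbackOneHot", "Quarterback"))
		.Append(mlContext.Transforms.Categorical.OneHotEncoding("LocationOneHot", "Location"))
		.Append(mlContext.Transforms.Categorical.OneHotEncoding("TopReceiverOneHot", "TopReceiver"))
		.Append(mlContext.Transforms.Categorical.OneHotEncoding("TopRunnerOneHot", "TopRunner"))
		// Stitch everything the model should see into a single Features vector.
		.Append(mlContext.Transforms.Concatenate("Features",
			"QuarterbackOneHot", "LocationOneHot", "NumberOfPointsScored",
			"TopReceiverOneHot", "TopRunnerOneHot"))
		.Append(trainer)
		// Turn the predicted key back into a string so Prediction.Outcome reads "Win" or "Loss".
		.Append(mlContext.Transforms.Conversion.MapKeyToValue("PredictedOutcome", "PredictedLabel"));

	return pipeline.Fit(trainingData);
}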

To use this model for prediction generation, I call mlContext.Model.CreatePredictionEngine with the model as my parameter and explain that I’ll give the model an object of type RawInput and expect back an object of type Prediction. The Prediction class is trivial:

public class Prediction
{
	[ColumnName("PredictedOutcome")]
	public string Outcome { get; set; }
}

I would have preferred the option to get back a simple string but it has to be a custom class.
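Since we keep referring to RawInput, here is roughly what that class looks like. The real definition is back in part two; the LoadColumn indices and exact types below are my assumptions based on how the class gets used in this post:

// Approximate shape of the RawInput class from part two. The LoadColumn indices
// and property types are assumptions; only the property names come from this post.
public class RawInput
{
	[LoadColumn(0)] public float Game { get; set; }
	[LoadColumn(1)] public string Quarterback { get; set; }
	[LoadColumn(2)] public string Location { get; set; }
	[LoadColumn(3)] public float NumberOfPointsScored { get; set; }
	[LoadColumn(4)] public string TopReceiver { get; set; }
	[LoadColumn(5)] public string TopRunner { get; set; }
	[LoadColumn(6)] public float NumberOfSacks { get; set; }
	[LoadColumn(7)] public float NumberOfDefensiveTurnovers { get; set; }
	[LoadColumn(8)] public float MinutesPossession { get; set; }
	[LoadColumn(9)] public string Outcome { get; set; }
}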

Knocking Things Down

Generating predictions is easy once you have the data you need. Here’s the test function, along with five separate test cases:

[TestCase(new object[] { "Josh Allen", "Home", 17, "Robert Foster", "LeSean McCoy", "Win" })]
[TestCase(new object[] { "Josh Allen", "Away", 17, "Robert Foster", "LeSean McCoy", "Win" })]
[TestCase(new object[] { "Josh Allen", "Home", 17, "Kelvin Benjamin", "LeSean McCoy", "Win" })]
[TestCase(new object[] { "Nathan Peterman", "Home", 17, "Kelvin Benjamin", "LeSean McCoy", "Loss" })]
[TestCase(new object[] { "Josh Allen", "Away", 7, "Charles Clay", "LeSean McCoy", "Loss" })]
public void TestModel(string quarterback, string location, float numberOfPointsScored,
	string topReceiver, string topRunner, string expectedOutcome)
{
	var outcome = predictor.Predict(new RawInput
	{
		Game = 0,
		Quarterback = quarterback,
		Location = location,
		NumberOfPointsScored = numberOfPointsScored,
		TopReceiver = topReceiver,
		TopRunner = topRunner,
		NumberOfSacks = 0,
		NumberOfDefensiveTurnovers = 0,
		MinutesPossession = 0,
		Outcome = "WHO KNOWS?"
	});

	Assert.AreEqual(expectedOutcome, outcome.Outcome);
}

What I’m doing here is taking in a set of parameters that I’d expect my users could input: quarterback name, home/away game, number of points scored, leading receiver, and leading rusher. I also need to pass in my expected test result—that’s not something I’d expect a user to give me, but it is important for test cases to make sure that my model is still scoring things appropriately.

The Predict() method on my predictor takes in a RawInput object, so I can generate that from my input data. Recall that we ignore some of those raw inputs, including game, number of sacks, number of defensive turnovers, and minutes of possession. Instead of bothering my users with that data, I just pass in defaults because I know we don’t use them. In a production-like scenario, I’m more and more convinced that I’d probably do all of this cleanup before model training, especially when dealing with wide data sets with dozens or maybe hundreds of inputs.

Note as well that I need to pass in the outcome. That outcome doesn’t have to be our best guess or anything reasonable—it just needs to have the same data type as our original input data set.
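If you’d rather not expose those placeholder fields to callers at all, a small factory method along these lines would hide them (this helper is hypothetical; it isn’t part of the original project):

// Hypothetical convenience method: callers supply only the inputs the model uses,
// and the ignored columns plus the throwaway Outcome get defaults.
public static RawInput FromUserInput(string quarterback, string location,
	float numberOfPointsScored, string topReceiver, string topRunner)
{
	return new RawInput
	{
		Game = 0,
		Quarterback = quarterback,
		Location = location,
		NumberOfPointsScored = numberOfPointsScored,
		TopReceiver = topReceiver,
		TopRunner = topRunner,
		NumberOfSacks = 0,
		NumberOfDefensiveTurnovers = 0,
		MinutesPossession = 0,
		Outcome = "UNKNOWN"
	};
}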

What we get back is a series of hopefully-positive test results:

Even in this model, Kelvin Benjamin drops the ball.

Predictions are that easy: a simple method call and you get back an outcome of type Prediction which has a single string property called Outcome.

Trying Out Different Algorithms

We can also use this test project to try out different algorithms and see what performs better given our data. To do that, I’ll build an evaluation tester. The entire method looks like this:

[TestCase(new object[] { "Naive Bayes" })]
[TestCase(new object[] { "L-BFGS" })]
[TestCase(new object[] { "SDCA Non-Calibrated" })]
public void BasicEvaluationTest(string trainerToUse)
{
	mlContext = new MLContext(seed: 9997);
	bmt = new BillsModelTrainer();

	var data = bmt.GetRawData(mlContext, "Resources\\2018Bills.csv");
	var split = mlContext.Data.TrainTestSplit(data, testFraction: 0.4);

	// If we wish to review the split data, we can run these.
	var trainSet = mlContext.Data.CreateEnumerable<RawInput>(split.TrainSet, reuseRowObject: false);
	var testSet = mlContext.Data.CreateEnumerable<RawInput>(split.TestSet, reuseRowObject: false);

	IEstimator<ITransformer> newTrainer;
	switch (trainerToUse)
	{
		case "Naive Bayes":
			newTrainer = mlContext.MulticlassClassification.Trainers.NaiveBayes(labelColumnName: "Label", featureColumnName: "Features");
			break;
		case "L-BFGS":
			newTrainer = mlContext.MulticlassClassification.Trainers.LbfgsMaximumEntropy(labelColumnName: "Label", featureColumnName: "Features");
			break;
		case "SDCA Non-Calibrated":
			newTrainer = mlContext.MulticlassClassification.Trainers.SdcaNonCalibrated(labelColumnName: "Label", featureColumnName: "Features");
			break;
		default:
			newTrainer = mlContext.MulticlassClassification.Trainers.NaiveBayes(labelColumnName: "Label", featureColumnName: "Features");
			break;
	}

	var newModel = bmt.TrainModel(mlContext, split.TrainSet, newTrainer);
	var metrics = mlContext.MulticlassClassification.Evaluate(newModel.Transform(split.TestSet));

	Console.WriteLine($"Macro Accuracy = {metrics.MacroAccuracy}; Micro Accuracy = {metrics.MicroAccuracy}");
	Console.WriteLine($"Confusion Matrix with {metrics.ConfusionMatrix.NumberOfClasses} classes.");
	Console.WriteLine($"{metrics.ConfusionMatrix.GetFormattedConfusionTable()}");

	Assert.AreNotEqual(0, metrics.MacroAccuracy);
}

Basically, we’re building up the same set of operations that we did in the Setup() method. Because that setup method relies on mutable objects, I don’t want to run the risk of overwriting the “real” model with one of these tests, so the trainer and model here stay local to the test.

The easiest way I could think of to handle multiple trainers in my test case was to put in a switch statement over my set of valid algorithms. That makes the method brittle with respect to future changes (where every new algorithm I’d like to test involves modifying the test method itself) but it works for our purposes here. After choosing the trainer, we walk through the same modeling process and then spit out a few measures: macro and micro accuracy, as well as a confusion matrix.
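If that brittleness bothers you, one alternative (just a suggestion; it’s not what the original code does) is to register the trainers in a dictionary, so supporting a new algorithm means adding an entry rather than editing the test method:

// A hypothetical replacement for the switch: map trainer names to factory functions.
var trainerFactories = new Dictionary<string, Func<MLContext, IEstimator<ITransformer>>>
{
	["Naive Bayes"] = ctx => ctx.MulticlassClassification.Trainers.NaiveBayes(
		labelColumnName: "Label", featureColumnName: "Features"),
	["L-BFGS"] = ctx => ctx.MulticlassClassification.Trainers.LbfgsMaximumEntropy(
		labelColumnName: "Label", featureColumnName: "Features"),
	["SDCA Non-Calibrated"] = ctx => ctx.MulticlassClassification.Trainers.SdcaNonCalibrated(
		labelColumnName: "Label", featureColumnName: "Features")
};

// Fall back to Naive Bayes when the requested name isn't registered.
IEstimator<ITransformer> newTrainer = trainerFactories.TryGetValue(trainerToUse, out var factory)
	? factory(mlContext)
	: trainerFactories["Naive Bayes"](mlContext);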

To understand macro versus micro accuracy (with help from ML.NET and Sebastian Raschka):

  • Micro accuracy sums up all of your successes and failures, whether the outcome is win, lose, draw, suspension due to blizzard, or cancellation on account of alien abduction. We add up all of those cases and if we’re right 95 out of 100 times, our micro accuracy is 95%.
  • Macro accuracy takes the arithmetic mean of the per-class accuracies, so every class counts equally. If our accuracies for different outcomes are: [ 85% (win), 80% (loss), 50% (draw), 99% (suspension due to blizzard), 33% (cancellation on account of alien abduction) ], we sum them up and divide by the number of outcomes: (0.85 + 0.8 + 0.5 + 0.99 + 0.33) / 5 = 69.4%.

Both of these measures are useful but macro accuracy has a tendency to get pulled down when a small category is wrong. Let’s say that there are 3 alien abduction scenarios out of 100,000 games and we got 1 of those 3 correct. Unless we were trying to predict for alien abductions, the outcome is uncommon enough that it’s relatively unimportant that we get it right. Our macro accuracy is 69.4% but if we drop that class, we’re up to 78.5%. For this reason, micro accuracy is typically better for heavily imbalanced classes.
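To make the distinction concrete, here is a tiny sketch of both calculations. The class counts are invented; only the per-class accuracies mirror the example above:

// Toy illustration of micro versus macro accuracy (uses System.Linq).
// The class counts are made up; only the per-class accuracies echo the example above.
var classes = new[]
{
	(Name: "Win",       Correct: 8_500, Total: 10_000),  // 85%
	(Name: "Loss",      Correct: 7_200, Total:  9_000),  // 80%
	(Name: "Draw",      Correct:    40, Total:     80),  // 50%
	(Name: "Blizzard",  Correct:    99, Total:    100),  // 99%
	(Name: "Abduction", Correct:     1, Total:      3)   // 33%
};

// Micro accuracy: pool every individual prediction and divide hits by attempts.
double microAccuracy = (double)classes.Sum(c => c.Correct) / classes.Sum(c => c.Total);

// Macro accuracy: average the per-class accuracies, each class weighted equally.
double macroAccuracy = classes.Average(c => (double)c.Correct / c.Total);

// The tiny Abduction class drags macro accuracy down while barely moving micro accuracy.
Console.WriteLine($"Micro = {microAccuracy:P1}, Macro = {macroAccuracy:P1}");

With these made-up counts, micro accuracy lands a bit above 82% while macro accuracy sits just under 70%, which is exactly the gap the bullet points describe.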

Finally, we print out a confusion matrix, which shows us what each model did in terms of predicted category versus actual outcome.

Conclusion

Generating predictions with ML.NET is pretty easy; it’s definitely one of the smoothest experiences in working with this product. Although we looked at a test in this blog post, applying this to real code is simple as well, as you can see in my MVC project demo code (particularly the controller). In this case, I made the conscious decision to train anew at application start because training is fast. In a real project, we’d want to save a model to disk and load it, which is what we’ll cover next time.
