A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.







Why does NLP need sociolinguistics?


This talk covers the basics of sociolinguistics and discusses why it’s important to consider linguistic variation when designing NLP applications.

Intro to Kaggle: XGBoost!


This workshop was both an introduction to Kaggle and a beginner-friendly workshop on the XGBoost algorithm. You’ll need to provide some info to watch the video, but the same content is covered in the code.

Character Encoding and You�


Why does your text output have all those black boxes in it? Why can’t it handle Portuguese? The answer is most likely “character encoding”. This talk covers some of the common character-encoding gotchas and some defensive programming practices to help your code handle multiple encodings.
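One common defensive pattern along these lines is to try a short list of likely encodings in order before falling back to replacement characters. A minimal sketch (the `decode_bytes` helper and its encoding list are my own illustration, not from the talk):

```python
def decode_bytes(raw: bytes, encodings=("utf-8", "cp1252", "latin-1")) -> str:
    """Try likely encodings in order; fall back to replacement characters.

    Hypothetical helper for illustration. Note latin-1 maps every byte to
    a character, so with the default list the fallback is never reached.
    """
    for enc in encodings:
        try:
            return raw.decode(enc)
        except UnicodeDecodeError:
            continue
    # Last resort: keep going, marking undecodable bytes as U+FFFD
    return raw.decode("utf-8", errors="replace")

# 'é' and 'ê' encoded as cp1252 are invalid UTF-8, so the first
# attempt fails and the cp1252 attempt succeeds.
text = decode_bytes("Olá, você".encode("cp1252"))
```

Ordering matters here: UTF-8 goes first because it is strict (random cp1252 bytes rarely decode as valid UTF-8), while latin-1 goes last because it accepts anything.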

Socially-Stratified Validation for ML Fairness


In this talk, I cover some of the frameworks used to think about fairness in machine learning. Then I turn to more practical matters of determining which social factors are important in machine learning, how to find appropriate validation data, and considerations when selecting metrics. Finally, I walk through a sample socially-stratified validation pipeline.
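The core move in socially-stratified validation is to compute your metric per social group rather than only in aggregate, so group-level performance gaps don’t get averaged away. A minimal sketch of that idea (the `stratified_accuracy` helper and the toy data are my own illustration, not the talk’s pipeline):

```python
from collections import defaultdict

def stratified_accuracy(y_true, y_pred, groups):
    """Return overall accuracy and accuracy broken out by social group.

    Hypothetical minimal helper: `groups` holds one group label per example.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    per_group = {g: correct[g] / total[g] for g in total}
    overall = sum(correct.values()) / sum(total.values())
    return overall, per_group

# Toy example where the model does worse on group "B" than group "A";
# the aggregate number alone would hide that gap.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
overall, per_group = stratified_accuracy(y_true, y_pred, groups)
```

The same pattern works for any metric (precision, recall, calibration error): slice the validation set by group first, then compute the metric within each slice.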

How to find stories in data through visualization


Working with data is a kind of interview - it is a complex back-and-forth, drawing out the expressiveness of data. The process is often visual, depending heavily on a sequence of graphical displays, “visualizations.” This three-hour workshop will focus on the concepts and skills you need to use data visualization effectively as part of your reporting practice - to conduct a data interview. You will learn how to spot trends, highlight changes over time, identify outliers, make meaningful comparisons, and describe important patterns in your data - all through the effective use of visualization strategies. This class will be based in the R language and distributed through Jupyter notebooks. These pre-built examples can later be customized to suit your own projects when you return to your newsroom.

How to Give a Lightning Talk


Lightning talks are quick talks, usually under 5 minutes. The short format makes them great for first-time speakers! This is a very meta lightning talk on how to give a lightning talk, covering how to develop your talk, how to practice it, and some of my best public-speaking tips.

Reproducible Research Best Practices (highlighting Kaggle Kernels)


In this workshop, we’ll take an existing research project and make it fully reproducible using Kaggle Kernels. This workshop will include hands-on instruction and best practices for each of the three components necessary for completely reproducible research.

I do, We do, You Do: Supporting active learning with notebooks


The gradual release of responsibility instructional model (also known as the I do, We do, You do model) is a pedagogical technique developed by Pearson & Gallagher where students engage with material more independently over time. In this workshop, participants will learn how to apply the I do, We do, You do framework to teaching with Jupyter notebooks. Over the course of the workshop, participants will complete a series of exercises designed to help them use Jupyter notebooks to more effectively support active learning in the classroom.