If you had all the data in the world at your feet, would you still build a deep learning model on it?
Suppose you solved a crucial business problem by building a complex deep neural network model to generate predictions on a new set of business data.
Deploying and maintaining this model costs your business a fortune, but the decision-makers consider it a small price to pay for a greater good.
That is all well and good. But could you have reduced the cost by opting for a simple regression model instead of a deep neural network?
Of course, you tried regression as a baseline model, and you went for a deep neural network only after you noticed that it outperformed the regression model by miles.
But if you had to suddenly build this model again with an enormous volume of training data, would you notice any difference in the performance metrics of the baseline and the complex models?
Or, for that matter, would an increase in the volume of training data improve the performance of a baseline model and bring it on par with a complex model?
Two Microsoft researchers, Michele Banko and Eric Brill, set out to answer a question much like this one.
They were trying to develop an improved grammar checker, specifically one that could help a person choose the correct word for a sentence from a set of easily confused alternatives.
As an example, imagine having to choose the correct word from the list to, too, and two to complete the sentence below.
“He has __ hands.”
This problem is known as confusion set disambiguation, since we are trying to resolve the ambiguity among a set of easily confused words.
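To make the task concrete, here is a minimal sketch that frames confusion set disambiguation as ordinary text classification: the words around the blank are the features, and the correct member of the confusion set is the label. The toy sentences, the bag-of-words features, and the scikit-learn naive Bayes pipeline are illustrative assumptions, not the setup Banko and Brill used.

```python
# Minimal sketch of confusion set disambiguation as classification.
# The confusion set {"to", "too", "two"}, the toy sentences, and the
# bag-of-context-words features are illustrative assumptions, not the
# features used in the original study.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Training examples: the context around the blank, labeled with the
# correct member of the confusion set.
contexts = [
    "he has _ hands",           # two
    "she walked _ the store",   # to
    "that is far _ expensive",  # too
    "i want _ go home",         # to
    "he ate _ much cake",       # too
    "they own _ cars",          # two
]
labels = ["two", "to", "too", "to", "too", "two"]

# A simple bag-of-words classifier over the context tokens.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(contexts, labels)

print(model.predict(["he has _ hands"]))  # expected: ['two']
```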
Banko and Brill asked themselves how best to approach this task to achieve the greatest performance improvement.
The possibilities that came to their mind were tweaking the existing algorithms, exploring new learning techniques, and using sophisticated features.
But since all of these were expensive options, they decided instead to see what would happen when they fed existing methods a much larger amount of training data.
Hence, they chose four existing learners for this task: winnow, perceptron, naive Bayes, and a memory-based learner.
They collected a training corpus of one billion words from numerous news articles, English literature, scientific texts, etc.
To keep the test data separate from the training set, they collected one million words of Wall Street Journal text.
They then trained each of the four learners at increasing cutoff points in the corpus: the first one million words, the first five million words, and so on, until all one billion words were used for training.
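The cutoff idea is easy to reproduce in miniature. The sketch below trains two simple learners on growing prefixes of a training set and reports held-out accuracy at each cutoff. The 20 Newsgroups dataset (downloaded by scikit-learn) and the chosen cutoff sizes are stand-ins for the billion-word corpus, and perceptron and naive Bayes are the only two of the four original learners with direct scikit-learn counterparts.

```python
# Sketch of the cutoff-style experiment: train the same learners on
# growing prefixes of the training data and track held-out accuracy.
# The dataset and cutoff sizes are placeholders, not the paper's corpus.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Perceptron
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

train = fetch_20newsgroups(subset="train")
test = fetch_20newsgroups(subset="test")

vectorizer = CountVectorizer(max_features=20000)
X_train = vectorizer.fit_transform(train.data)
X_test = vectorizer.transform(test.data)

learners = {"perceptron": Perceptron(), "naive_bayes": MultinomialNB()}
cutoffs = [1000, 2000, 4000, 8000, len(train.data)]  # growing prefixes

for name, learner in learners.items():
    for n in cutoffs:
        # Train on the first n documents only, then score on the held-out set.
        learner.fit(X_train[:n], train.target[:n])
        acc = accuracy_score(test.target, learner.predict(X_test))
        print(f"{name:12s} n={n:6d} accuracy={acc:.3f}")
```

If accuracy keeps climbing at every cutoff, the learner has not yet saturated on the available data.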
They published the results of their experimentation in a paper titled “Scaling to Very Very Large Corpora for Natural Language Disambiguation” in 2001.
It included a graph in which three of the four learners produce almost the same accuracy when trained on the full billion-word corpus.

Without ignoring the caveats and practicalities of this research, it can be helpful to ask ourselves whether adding more data could improve the baseline model's performance.
The authors wrote in the conclusion of their paper:
“We propose that a logical next step for the research community would be to direct efforts towards increasing the size of annotated training collections while deemphasizing the focus on comparing different learning techniques trained only on small training corpora.”
Of course, we cannot ignore that increasing the volume of training data comes with its own cost and memory constraints. But when feasible, it can be an effective way to approach model building.
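If you want to make that judgment on your own problem, a learning curve for the baseline is a cheap first check. The sketch below uses scikit-learn's learning_curve on a synthetic dataset with a logistic regression baseline; both are placeholders for your actual data and model.

```python
# Sketch: check whether a simple baseline is still improving as the
# training set grows, using scikit-learn's learning_curve utility.
# The synthetic dataset and the logistic regression baseline are
# illustrative assumptions; substitute your own data and model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000),
    X, y,
    train_sizes=np.linspace(0.1, 1.0, 5),  # 10% .. 100% of the training folds
    cv=5,
    scoring="accuracy",
)

# If validation accuracy is still rising at the largest size, more data
# (rather than a more complex model) may be the cheaper next step.
for n, score in zip(sizes, val_scores.mean(axis=1)):
    print(f"train size={n:5d} validation accuracy={score:.3f}")
```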