When Does Machine Learning Become Deep Learning?

Machine learning and deep learning have flourished over the last several years. With computing power rapidly becoming cheaper and more accessible, implementing learning algorithms is easier than ever. The lower barrier to entry, however, has also ushered in misunderstandings about these technologies. In particular, the relationship between machine learning and deep learning is often poorly understood, and the most detrimental consequence of that confusion is the misuse of deep learning methodologies. To discuss this misuse, the difference between machine learning and deep learning must first be clarified.

Machine learning describes quantitative processes that use computing systems to learn patterns in data. For example, consider the task of identifying komodo dragons in images. At the simplest extreme, a machine learning model could ingest the width and height of a recognized object and label it as dragon or non-dragon based on the ratio of width to height. This approach would probably lack accuracy, though. A slightly more complex model could break the image into several sections, each detailing some feature of the original image, and search for tongue-like and eye-like feature tiles to label the image. Even more complexity might be required to distinguish between komodo dragons and, say, snakes. Rather than only looking at feature tiles, the model now also remembers where each feature tile occurs relative to the others. Tracking this information yields a new internal representation of the original data: if that representation contains tongue-like, eye-like, and ear-like feature tiles in appropriate relative positions, the image is labeled as a komodo dragon.

These internal representations are at the heart of deep learning. Machine learning becomes “deep” when a model gains complexity by learning internal representations of data that, in turn, inform subsequent representations, and so on. What is less well defined is how many layers of internal representations are needed to constitute deep learning. Regardless, more layers, and therefore more representations, increase the potential for a model to gain predictive power.
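To make the contrast concrete, here is a minimal Python sketch (using NumPy) of the two extremes described above: a shallow classifier built on a single hand-picked feature, and a layered model in which each step transforms the previous step's output into a new internal representation. The function names, the threshold, and the random placeholder weights are hypothetical illustrations, not a trained model.

```python
import numpy as np

# Shallow approach: one hand-picked feature (width divided by height).
# The threshold is a made-up illustration, not a value learned from data.
def shallow_dragon_classifier(width, height, ratio_threshold=3.0):
    return (width / height) > ratio_threshold

# "Deep" approach: each layer turns the previous layer's output into a new
# internal representation. The weights below are random placeholders; a real
# model would learn them from labeled images.
rng = np.random.default_rng(0)
layer_weights = [
    rng.normal(size=(64, 32)),  # raw pixel features -> feature-tile scores
    rng.normal(size=(32, 16)),  # feature tiles -> relative-position features
    rng.normal(size=(16, 1)),   # positions -> dragon / non-dragon score
]

def deep_dragon_classifier(pixel_features):
    representation = pixel_features
    for weights in layer_weights[:-1]:
        # Each transformation builds a new representation on top of the last.
        representation = np.maximum(representation @ weights, 0.0)
    score = representation @ layer_weights[-1]
    return score.item() > 0.0

# Example usage with a fake 64-dimensional "image" feature vector.
fake_image = rng.normal(size=64)
print(shallow_dragon_classifier(width=120, height=45))
print(deep_dragon_classifier(fake_image))
```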

Given the relationship between deep learning and model complexity, are there situations where added complexity is misused? Misuse appears when the wrong tool is chosen for the task at hand; not every task requires a model of the highest complexity. If the task were to differentiate images of red squares from blue squares rather than to detect komodo dragons, only information about the presence of red pixels would be necessary. A much simpler algorithm than deep learning could achieve this task. In essence, you only need a bandage as large as your cut; anything larger is a waste of resources. Knowing what complexity of algorithm to use, and when, is key to any data science solution.
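As a sketch of that simpler alternative, the check below (written in Python with NumPy; the function name and synthetic images are illustrative assumptions) labels an image as a red square simply by comparing the average intensity of its red and blue channels. No layered representations are needed.

```python
import numpy as np

# `image` is assumed to be an (H, W, 3) RGB array.
# Comparing channel means is enough to separate red squares from blue ones;
# no internal representations are required.
def is_red_square(image):
    return image[..., 0].mean() > image[..., 2].mean()

# Example usage with synthetic solid-color squares.
red_square = np.zeros((32, 32, 3)); red_square[..., 0] = 255
blue_square = np.zeros((32, 32, 3)); blue_square[..., 2] = 255
print(is_red_square(red_square))   # True
print(is_red_square(blue_square))  # False
```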
