History of ML in computer vision and medical imaging
Before the rise of "deep learning" in 2013, feature-based ML was the dominant method in these domains. Classic classifiers such as linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and the k-nearest neighbor classifier (k-NN) were employed for classification before 1980, before the phrase "machine learning" even existed. Rumelhart and Hinton proposed the multilayer perceptron (MLP) in 1986. The first neural network (NN) research boom had occurred in the 1960s, so the MLP ushered in the second. Vapnik presented the support vector machine (SVM) in 1995, which quickly became the most popular classifier, due in part to openly released code on the Web. Random forests, proposed by Ho in 1995, and dictionary learning, proposed by Mairal et al. in 2009, are two further examples of ML algorithms introduced during this period.
Before the phrase "deep learning" was coined, numerous machine learning approaches that took images as input had already been presented. It all started with Fukushima's Neocognitron in 1980. LeCun et al. presented a convolutional neural network (CNN) in 1989, refining the Neocognitron. Suzuki et al. applied a convolutional MLP to cardiac images in 1994. Two years later, Suzuki et al. introduced neural filters, based on a modified MLP, to reduce noise in images, followed by neural edge enhancers in 2000. Hinton et al. proposed the deep belief network (DBN) in 2006 and used the phrase "deep learning" a year later. Deep learning nevertheless remained largely unknown until late 2012, when a CNN won the ImageNet competition. All of these models, including the Neocognitron, MLP, CNN, neural filters, MTANN, and DBN, permit deep architectures.
Thus, the term "deep learning," which precisely speaking denotes ML with image input (image-based ML) and deep architecture, does not offer new ML models. Rather, it is essentially a collection of earlier work on ML with image input that has recently been recognized again under different terminology.