A Quick Guide to Machine Learning (ML)

Machine learning explained, whether you're a beginner or an enterprise practitioner

Seth Adler
05/13/2019

What is machine learning?

Artificial intelligence (AI) and machine learning (ML) are terms that are often used interchangeably in data science, though they aren't the same thing. Machine learning is a subset of AI built on the idea that data scientists should give machines data and allow them to learn from it on their own. Much of machine learning relies on neural networks, computer systems loosely modeled on how the human brain processes information. A neural network is an algorithm designed to recognize patterns, calculate the probability of a certain outcome occurring, and "learn" from errors and successes through a feedback loop. Neural networks are a valuable tool in their own right, and they also inform neuroscience research. Deep learning, which stacks many layers of neural networks, can establish correlations between two things and learn to associate them with each other. Given enough data to work with, it can predict what will happen next.
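To make that feedback loop concrete, here is a minimal sketch (my own illustrative example, not drawn from any source the article cites) of a single artificial neuron learning the logical OR function: it guesses, compares the guess to the correct answer, and nudges its weights after each error.

```python
import numpy as np

# A single artificial neuron learning logical OR through a feedback loop.
# Everything here (data, learning rate) is an illustrative choice.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([0, 1, 1, 1])                      # correct answers (OR)

weights = np.zeros(2)
bias = 0.0
learning_rate = 0.1

for epoch in range(10):                 # repeat the feedback loop
    for inputs, label in zip(X, y):
        prediction = int(np.dot(weights, inputs) + bias > 0)
        error = label - prediction      # compare the guess to the truth
        weights += learning_rate * error * inputs  # nudge toward correctness
        bias += learning_rate * error

print(weights, bias)  # the learned rule now reproduces OR
```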

 

Supervised and unsupervised learning

There are two main frameworks of ML: supervised learning and unsupervised learning. In supervised learning, the algorithm starts with a set of training examples that have already been correctly labeled. It learns the relationships between inputs and labels from these examples and applies those learned associations to new, unlabeled data. In unsupervised learning, the algorithm starts with unlabeled data: it is concerned only with inputs, not outputs. Unsupervised learning can group similar data points into clusters and reveal which data points resemble one another. In short, in unsupervised learning the computer teaches itself, whereas in supervised learning it is taught by labeled examples. With the rise of Big Data, neural networks are more important and useful than ever for learning from these large datasets.
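As a minimal illustration of the supervised side (my own sketch, using scikit-learn rather than any tool the article names), the snippet below trains a classifier on labeled examples and then predicts labels for data it has never seen. An unsupervised counterpart appears in the clustering discussion further down.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Labeled toy data: each row of X is an example, each entry of y its label.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)        # learn from correctly labeled examples

print(model.predict(X_test[:5]))   # apply learned associations to new data
print(model.score(X_test, y_test)) # accuracy on unseen, held-out data
```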

Deep learning is usually linked to artificial neural networks (ANNs), variations that stack multiple neural networks to achieve a higher level of perception. Deep learning is being used in the medical field to accurately diagnose more than 50 eye diseases.

 

Uses of machine learning

Predictive analytics is composed of several statistical techniques, including machine learning, that estimate future outcomes. It helps analyze future events based on the outcomes of similar events in the past. Predictive analytics and machine learning go hand in hand because predictive models often include a machine learning algorithm. Neural networks are among the most widely used predictive models.
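A minimal sketch of the idea (my own example, with made-up numbers): fit a model to past observations, then use it to estimate a future value.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Past outcomes: monthly sales for months 1-6 (illustrative numbers).
months = np.array([[1], [2], [3], [4], [5], [6]])
sales = np.array([100, 110, 125, 130, 150, 160])

model = LinearRegression()
model.fit(months, sales)     # learn the trend from past events

print(model.predict([[7]]))  # estimate the outcome for month 7
```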

Cognitive computing is the blanket term for receiving data, analyzing it, and building actionable insights from it, much as the human brain would. Big Data, cloud computing, and machine learning all fall under the cognitive computing umbrella.

Because business often means solving the same problem with different targets, products, or services, creating one flexible machine learning model that can repeat tasks is imperative.

 


Dr Andy Pardoe, REF Global Development Manager at Credit Suisse, talks about building a strategy for machine learning.

Source: AIIA Network Events: AI for Enterprise Summit

 

Machine learning tools

Computational learning theory (CLT) makes predictions based on past data. This is applicable in today's machine learning environment because it helps the user define useful data and avoid irrelevant data, which speeds up the machine learning process and decreases the chance of incorrect outputs. With CLT, data isn't just computed: patterns are recognized and rules are developed, such as how many training examples are necessary or how much time a problem will take to solve.
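One classical CLT result along these lines (my own addition; the article doesn't name it) is the PAC, or "probably approximately correct," sample-complexity bound for a finite set of candidate rules: to be within error ε of the truth with probability at least 1 − δ, a consistent learner needs roughly m ≥ (1/ε)(ln |H| + ln(1/δ)) training examples. The snippet below simply computes that estimate.

```python
import math

def pac_sample_bound(hypothesis_count, epsilon, delta):
    """Training examples needed so a learner choosing among a finite set of
    candidate rules is within error epsilon with probability 1 - delta."""
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / epsilon)

# e.g. 1,000 candidate rules, 5% error tolerance, 95% confidence:
print(pac_sample_bound(1000, epsilon=0.05, delta=0.05))  # -> 199
```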

Pattern recognition is a tool used by machine learning to define and refine algorithms. It can operate on tangible patterns through computer vision, which receives inputs from the visual world and reacts to them. Because it relies entirely on data, pattern recognition is used to present data and theoretical predictions that other branches of machine learning then build upon. Pattern recognition in technology gained momentum in the 1970s and opened the door to heuristic and decision tree methods.

Pattern recognition is only viable in the context of machine learning if it can identify patterns quickly and accurately, and if it can correctly identify a partially covered image or an object from several angles. Applications for this type of pattern recognition include autonomous vehicles and cancer screening, where the patterns are detected by pattern recognition technology but acted upon by the broader scope of human or artificial intelligence. As such, the term pattern recognition is used less and less often, instead falling under the broader scope of machine learning and deep learning.

Cluster analytics, or clustering, is a mechanism of machine learning that groups data points with similar characteristics together. Clustering can use many different algorithms and parameters on its fact-finding mission, which often leads to different groupings of the same data set. The resulting clusters then serve as inputs to other machine learning capacities, such as computer vision, natural language processing, and data mining. While clustering can be helpful in identifying target groups, its power goes beyond simple surveying: it already operates as a predictive tool in cybersecurity, where it clusters and identifies malicious URLs and spam keywords.

Clustering falls under unsupervised learning and can also be used on its own for insights into data distribution, such as how certain demographics poll politically. That output can then feed other algorithms, such as marketing models that target political demographics.
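Here is a minimal clustering sketch (my own example, using scikit-learn's k-means; the article does not prescribe a particular algorithm): unlabeled points are grouped purely by similarity, with no labels ever supplied.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Unlabeled data: 150 points drawn around 3 hidden centers.
X, _ = make_blobs(n_samples=150, centers=3, random_state=0)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)   # group points by similarity alone

print(labels[:10])               # cluster assignment for the first 10 points
print(kmeans.cluster_centers_)   # the discovered group centers
```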

Clustering is currently being leveraged in this way by deep learning applications.

 


Listen to Lee Coultier, Ascension Shared Services' CEO and Chair, discuss Machine Learning.

Source: The AIIA Network Podcast

 

Metaheuristic algorithms

Because ML is only beneficial if the time it takes to compute delivers a return on investment, metaheuristic algorithms were developed to cut down on an algorithm's computational time. Precision is sometimes sacrificed, but a general answer computed in a short time frame is sufficient for certain use cases. A heuristic is a machine learning shortcut that arrives at approximations when exact solutions are unobtainable, or simply for the sake of time management, prioritizing speed over perfection. In a heuristic, branch paths are weighted so time isn't spent traveling down every branch repeatedly for the sake of generating new data and arriving at precise solutions. Instead, a heuristic works on preset conditions, such as a time limit or an estimation based on a smaller dataset. For example, a heuristic could be defined to count the blue crayons in a crayon box. Its estimate would include sky blue, royal blue, and cerulean, which are perfectly acceptable parameters, although not exact.
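As a minimal sketch of the speed-over-perfection trade (my own example; the article names no specific metaheuristic), the snippet below uses random-restart hill climbing to approximately maximize a function within a fixed budget of evaluations, accepting a good-enough answer instead of searching exhaustively.

```python
import random

def score(x):
    # Function to maximize (illustrative); the exact peak is at x = 2.
    return -(x - 2) ** 2 + 5

def hill_climb(budget=300, step=0.1):
    """Random-restart hill climbing: a heuristic that trades a guarantee of
    the exact optimum for a good answer within a fixed evaluation budget."""
    best_x, best_score = 0.0, score(0.0)
    evals = 0
    while evals < budget:                  # preset condition: a budget cap
        x = random.uniform(-10, 10)        # random restart
        while evals < budget:
            evals += 1
            nxt = max((x - step, x + step), key=score)
            if score(nxt) > score(x):
                x = nxt                    # keep climbing uphill
            else:
                break                      # local peak reached; restart
        if score(x) > best_score:
            best_x, best_score = x, score(x)
    return best_x, best_score

random.seed(0)
print(hill_climb())  # close to (2.0, 5.0): fast, but not guaranteed exact
```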

 

Automated machine learning

Historically, machine learning was a time-consuming and expensive process reserved for the large corporations and organizations that could afford data scientists, mathematicians, and engineers. As the field has evolved, systems such as autonomic computing and automated machine learning have driven down the complexity and cost of machine learning through third-party software.

No longer must data scientists create complicated algorithms on a case-by-case basis to execute machine learning. Much as HTML can be written through simpler block-based tools, increasing its accessibility to the layperson, automated machine learning provides the building blocks of machine learning as model presets. The user plugs in the appropriate data categories, those categories are populated automatically, and the model builds on itself, acting in real time through adaptive algorithms. Adaptive algorithms further enhance this process by folding new data into output calculations: the algorithms shift and refine themselves based on new input. For example, Google Maps darkens in a tunnel or at night when it receives data that the environment is dark. Because they can process data as it arrives and give less weight to old or irrelevant data, adaptive algorithms are also used by automated stock trading software.
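To give a flavor of the "preset building blocks" idea, here is a minimal sketch (my own example; real AutoML products go much further) that automates one piece of the process, settings selection, with scikit-learn's GridSearchCV: the user supplies data plus a menu of presets, and the search picks the best-performing configuration on its own.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=0)

# The "presets": candidate model settings the automation may choose from.
presets = {"max_depth": [2, 4, 8], "min_samples_leaf": [1, 5, 10]}

search = GridSearchCV(DecisionTreeClassifier(random_state=0), presets, cv=5)
search.fit(X, y)                     # tries every preset automatically

print(search.best_params_)           # the settings the automation chose
print(round(search.best_score_, 3))  # cross-validated accuracy
```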

 

Reinforcement learning

Reinforcement learning is a technique applied in automated machine learning and is a cousin of supervised and unsupervised learning. Where unsupervised learning produces output from unlabeled inputs and supervised learning uses labeled data sets, reinforcement learning repeats a process, abandoning paths that lead to negative reinforcement and refining paths that lead to positive reinforcement. In other words, the system can practice and experiment toward a desired outcome, constantly refining and optimizing its technique at phenomenal speed.

The actor, actions, and environment in reinforcement learning are defined, but the optimal path is not. Reinforcement learning combined with deep learning is how machine learning and artificial intelligence programs learn to beat human chess professionals. Real-world applications include targeted ads that increase click-through rates.
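A minimal sketch of the idea (my own example: tabular Q-learning on a toy five-cell corridor, not anything the article specifies): the actor, the actions, and the environment are fixed, and the agent discovers the best path by trying actions and reinforcing the ones that lead to reward.

```python
import random

# Toy environment: a five-cell corridor; the reward sits at the right end.
N_STATES, ACTIONS = 5, (-1, +1)        # actions: step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration
random.seed(0)

for episode in range(200):             # practice runs
    s = 0                              # start at the left end
    while s != N_STATES - 1:
        # Explore sometimes; otherwise exploit the best-known action.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        # Positive outcomes reinforce the path that produced them.
        best_next = max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s2

# The learned policy: the preferred action in each non-terminal cell.
print([max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)])
```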

IT issues have grown in complexity with the advancement of technology. Because IT systems now rely on complicated machine learning and artificial intelligence principles, IT departments are at risk of performance slowdowns and human errors going undetected in a system's hardware and software. With reinforcement learning as its base, autonomic computing addresses these dilemmas by using technology to manage technology.

In autonomic computing, machine learning and reinforcement learning are used to enable "self-" systems. Examples include self-protecting systems, which automate cybersecurity, and self-healing systems, which can download patches, delete malware, or defragment a hard drive. Autonomic computing is designed to operate like the human nervous system, running in the background to monitor, fix, and maintain the programs and operating systems that technology depends on.

 

Action model learning

Action model learning is a type of reinforcement learning in which previously existing algorithmic models are utilized. Where reinforcement learning runs the race quickly, learning from successes and failures through trial and error, action model learning is more "thoughtful": it can reason from new knowledge and predictive analytics, allowing it to take educated shortcuts to the finish line.

Predictive analytics uses historical data to predict the future, and pattern recognition reorganizes data by like characteristics to sharpen those predictions. Using an eCommerce example, predictive analytics observes that umbrellas are purchased during the rainy season. Action model learning can take this knowledge and apply it to online advertising, populating umbrella ads based on the weather forecast. Manually customizing ads in this way would be time-consuming and, at the scale of the eCommerce world, nearly impossible.
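A minimal sketch of that umbrella example (entirely my own illustration, with invented numbers): a learned association between weather and umbrella purchases is applied automatically to pick which ad to serve, with no manual curation.

```python
# Learned association (illustrative): purchase lift observed per condition.
umbrella_lift = {"rain": 3.2, "cloudy": 1.4, "sunny": 0.6}

def pick_ad(forecast: str) -> str:
    """Apply the learned association to the forecast; no manual curation."""
    return "umbrella" if umbrella_lift.get(forecast, 1.0) > 1.0 else "sunglasses"

for forecast in ("rain", "cloudy", "sunny"):
    print(forecast, "->", pick_ad(forecast))
```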

 

Conclusion

The scope and definition of machine learning are constantly evolving with technology. As new applications and resources are developed to deploy the power of machine learning, its accessibility and utilization by the broader population continue to be observed, assessed, and refined.

 

References

A Beginner’s Guide to Neural Networks and Deep Learning [webpage]. (n.d.). Skymind.

A Beginner's Guide to Deep Reinforcement Learning [webpage]. (n.d.). Skymind.

An architectural blueprint for autonomic computing [white paper]. (2005). www-03.ibm.com.

Ansari, S. (n.d.). Pattern Recognition | Introduction. geeksforgeeks.org.

Banafa, A. (2016, July 14). What is Autonomic Computing? bbvaopenmind.com

Banerjee, S. (2018, October 1). Difference between Algorithms and Heuristic. medium.com

Bavoria, V. (2018, March 21). Cluster Analysis Will Power Up Cognitive Computing. i-runway.com.

Certicky, M. (2014, August). Real-Time Action Model Learning with Online Algorithm 3SG. researchgate.net.

Dean, J. (2018, January 11). The Google Brain Team — Looking Back on 2017. ai.googleblog.com.

Deep Learning vs. Machine Learning vs. Pattern Recognition. (2017, September 14). www.alibabacloud.com.

Evans, D. (2017, March 28). Cognitive Computing vs Artificial Intelligence: what’s the difference? iq.intel.co

Greene, T. (2018, July 17). A beginner’s guide to AI: Computer vision and image recognition. thenextweb.com.

Gupta, A. (n.d.). Machine Learning Algorithms in Autonomous Driving. iiot-world.com

Hardesty, L. (2017, April 14). Explained: Neural networks. MIT News.

Li, H. (n.d.). Short Term Forecasting of Financial Market Using Adaptive Learning in Neural Network. worldcomp-proceedings.com

Loshin, D. (2017, April). Three examples of machine learning methods and related algorithms. searchenterpriseai.techtarget.com/

Marr, B. (2016, December 6). What Is The Difference Between Artificial Intelligence And Machine Learning? Forbes

Marr, B. (2016, March 23). What Everyone Should Know About Cognitive Computing. Forbes

Reinforcement Learning Explained: Overview, Comparisons and Applications in Business [webpage]. (2019, January). altexsoft.com.

Sarkar, T. (2018, October 26). How to analyze “Learning”: Short tour of Computational Learning Theory. Towards Data Science

Siva, C. (2018, November 30). Machine Learning and Pattern Recognition. dzone.com

Wakefield, K. (n.d.). Predictive analytics and machine learning. SAS.

What is Cognitive Computing? 5 Ways to Make Your Business More Intelligent [webpage]. (2017, October 16). newgenapps.com.

Yadav, A. (2019, January 15). https://tdwi.org/articles/2019/01/15/adv-all-rise-of-automated-machine-learning.aspx. tdwi.org.

