Learning Machines 101


Rating
4.4
from
91 reviews
This podcast has
85 episodes
Language
Date created
2014/06/24
Average duration
33 min.
Release period
87 days

Description

Smart machines based upon the principles of artificial intelligence and machine learning are now prevalent in our everyday lives. For example, artificially intelligent systems recognize our voices, sort our pictures, make purchasing suggestions, and can automatically fly planes and drive cars. In this podcast series, we examine questions such as: How do these devices work? Where do they come from? And how can we make them even smarter and more human-like? These are the questions addressed in this podcast series!

Podcast episodes

Check the latest episodes from the Learning Machines 101 podcast


LM101-086: Ch8: How to Learn the Probability of Infinitely Many Outcomes
2021/07/20
This 86th episode of Learning Machines 101 discusses the problem of assigning probabilities to a possibly infinite set of outcomes in the space-time continuum that characterizes our physical world. Such a set is called an “environmental event”. A machine learning algorithm uses information about the frequency of environmental events to support learning. If we want to study statistical machine learning, then we must be able to discuss how to represent and compute the probability of an environmental event. It is essential that we have methods for communicating probability concepts to other researchers, for calculating probabilities, and for calculating the expectations of specific environmental events. This episode discusses the challenges of assigning probabilities to events when we allow for events comprised of an infinite number of outcomes. Along the way we introduce essential concepts for representing and computing probabilities using mathematical tools from measure theory, such as sigma-fields and the Radon-Nikodym probability density function. Near the end we also briefly discuss the intriguing Banach-Tarski paradox and how it motivates the development of some of these special mathematical tools. Check out: www.learningmachines101.com and www.statisticalmachinelearning.com for more information!
more
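As a rough numerical illustration of the idea in the episode above, that an event containing infinitely many outcomes can still have a single well-defined probability computed from a density function (this sketch is not from the episode; the Gaussian density and the function names are assumptions chosen for illustration):

```python
import math

def gaussian_density(x):
    """Standard normal density: the Radon-Nikodym density of the
    probability measure with respect to the Lebesgue measure."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def event_probability(a, b, steps=100_000):
    """Approximate P(a <= X <= b) with a midpoint-rule integral of the
    density over the event [a, b]: an event with uncountably many
    outcomes still gets one well-defined probability."""
    width = (b - a) / steps
    return sum(gaussian_density(a + (i + 0.5) * width) for i in range(steps)) * width

# Each individual outcome has probability zero, yet the interval does not:
p = event_probability(-1.0, 1.0)  # ≈ 0.6827 for a standard normal
```

The point of the sketch is that probability attaches to measurable sets of outcomes via the density, not to the individual outcomes themselves.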
LM101-085: Ch7: How to Guarantee your Batch Learning Algorithm Converges
2021/05/21
This 85th episode of Learning Machines 101 discusses formal convergence guarantees for a broad class of machine learning algorithms designed to minimize smooth non-convex objective functions using batch learning methods. In particular, it considers a broad class of unsupervised, supervised, and reinforcement machine learning algorithms that iteratively update their parameter vector by adding a perturbation computed from all of the training data. This process is repeated until a parameter vector is generated that exhibits improved predictive performance. The magnitude of the perturbation at each learning iteration is called the “stepsize” or “learning rate,” and the identity of the perturbation vector is called the “search direction”. Simple mathematical formulas are presented, based upon research from the late 1960s by Philip Wolfe and G. Zoutendijk, that ensure convergence of the generated sequence of parameter vectors. These formulas may be used as the basis for the design of artificially intelligent automatic learning rate selection algorithms. The material in this podcast provides an overview of Chapter 7 of my new book “Statistical Machine Learning” and is based upon material originally presented in Episode 68 of Learning Machines 101! Check out: www.learningmachines101.com for the show notes!
more
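As a hedged sketch of the stepsize-selection idea described above (the function names are hypothetical, and the use of the Armijo sufficient-decrease test, one half of the classical Wolfe conditions, is an illustrative choice, not a formula taken from the book):

```python
def minimize_batch(grad, f, x0, c1=1e-4, max_iters=200, tol=1e-8):
    """Batch gradient descent with a backtracking line search.

    Each iteration perturbs the parameter along the negative gradient
    (the "search direction") and halves the stepsize until the
    sufficient-decrease condition
        f(x - t*g) <= f(x) - c1 * t * g**2
    holds, so the objective is guaranteed to drop at every step.
    """
    x = x0
    for _ in range(max_iters):
        g = grad(x)
        if abs(g) < tol:
            break
        t = 1.0
        while f(x - t * g) > f(x) - c1 * t * g * g:
            t *= 0.5
        x = x - t * g
    return x

# Example: minimize the smooth objective f(x) = (x - 3)^2 from x = 0
f = lambda x: (x - 3.0) ** 2
grad = lambda x: 2.0 * (x - 3.0)
x_star = minimize_batch(grad, f, 0.0)  # converges to 3.0
```

The sufficient-decrease test is what makes the stepsize selection "automatic": no hand-tuned learning rate schedule is needed for this toy objective.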
LM101-084: Ch6: How to Analyze the Behavior of Smart Dynamical Systems
2021/01/05
In this episode of Learning Machines 101, we review Chapter 6 of my book “Statistical Machine Learning,” which introduces methods for analyzing the behavior of machine inference algorithms and machine learning algorithms as dynamical systems. We show that when a dynamical system can be viewed as a special type of optimization algorithm, its behavior can be analyzed even when the system is highly nonlinear and high-dimensional. Learn more by visiting: www.learningmachines101.com and www.statisticalmachinelearning.com .
more
How to Use Calculus to Design Learning Machines
2020/08/29
This particular podcast covers the material from Chapter 5 of my new book “Statistical Machine Learning: A unified framework,” which is now available! The chapter shows, with many examples, how matrix calculus is useful for the analysis and design of both linear and nonlinear learning machines. We discuss how to use the matrix chain rule to derive deep learning gradient descent algorithms and why it is relevant to software implementations of deep learning. We also discuss how matrix Taylor series expansions are relevant to machine learning algorithm design and to the analysis of generalization performance! For additional details check out: www.learningmachines101.com and www.statisticalmachinelearning.com
more
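A minimal sketch of the chain rule at work in a one-parameter "deep" prediction, with the analytic derivative checked against a finite difference (the model and names are hypothetical illustrations, not code from Chapter 5):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, x, y):
    """Squared-error loss of the one-unit prediction sigmoid(w * x)."""
    return (y - sigmoid(w * x)) ** 2

def loss_gradient(w, x, y):
    """Gradient via the chain rule:
    dL/dw = dL/dp * dp/dz * dz/dw, where p = sigmoid(z) and z = w * x."""
    z = w * x
    p = sigmoid(z)
    return 2.0 * (p - y) * p * (1.0 - p) * x

# Finite-difference check of the chain-rule derivative:
w, x, y, h = 0.7, 1.5, 1.0, 1e-6
numeric = (loss(w + h, x, y) - loss(w - h, x, y)) / (2.0 * h)
analytic = loss_gradient(w, x, y)  # agrees with the numeric estimate
```

Deep learning software applies exactly this factorization layer by layer, which is why the matrix chain rule matters for implementations.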
How to Analyze and Design Linear Machines
2020/07/23
The main focus of this particular episode covers the material in Chapter 4 of my forthcoming book titled “Statistical Machine Learning: A unified framework.” Chapter 4 is titled “Linear Algebra for Machine Learning.” Many important and widely used machine learning algorithms may be interpreted as linear machines, and this chapter shows how to use linear algebra to analyze and design such machines. In addition, these same techniques are fundamentally important for the development of techniques for the analysis and design of nonlinear machines. This podcast provides a brief overview of linear algebra for machine learning for the general public, as well as information for students and instructors regarding the contents of Chapter 4 of Statistical Machine Learning. For more details, check out: www.statisticalmachinelearning.com
more
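As a toy illustration of designing a linear machine with linear algebra, here is a least-squares line fit via the normal equations (the helper name is an assumption for illustration, not code from Chapter 4):

```python
def fit_linear_machine(xs, ys):
    """Fit the least-squares line y = a*x + b by solving the 2x2
    normal equations, the textbook linear-algebra route to a
    linear machine's parameters."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Data generated by y = 2x + 1 is recovered exactly:
a, b = fit_linear_machine([0, 1, 2, 3], [1, 3, 5, 7])
```

The same normal-equations idea scales to many parameters once the sums are replaced by matrix products, which is the chapter's larger point.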
How to Define Machine Learning (or at Least Try)
2020/04/09
This particular podcast covers the material in Chapter 3 of my new book “Statistical Machine Learning: A unified framework,” with expected publication date May 2020. Chapter 3 discusses how to formally define machine learning algorithms. Briefly, a learning machine is viewed as a dynamical system that is minimizing an objective function. In addition, the knowledge structure of the learning machine is interpreted as a preference relation graph which is implicitly specified by the objective function. This week we also include in our book review section a new book titled “The Practitioner’s Guide to Graph Data” by Denise Gosnell and Matthias Broecheler. To find out more information visit the website: www.learningmachines101.com .
more
How to Represent Knowledge using Set Theory
2020/02/29
This particular podcast covers the material in Chapter 2 of my new book “Statistical Machine Learning: A unified framework,” with expected publication date May 2020. In this episode we discuss Chapter 2, titled “Set Theory for Concept Modeling,” which shows how to represent knowledge using set theory notation.
more
How to View Learning as Risk Minimization
2019/12/24
This particular podcast covers the material in Chapter 1 of my new (unpublished) book “Statistical Machine Learning: A unified framework”. In this episode we discuss Chapter 1 of my new book, which shows how supervised, unsupervised, and reinforcement learning algorithms can be viewed as special cases of a general empirical risk minimization framework. This is useful because it provides a framework for not only understanding existing algorithms but also for suggesting new algorithms for specific applications.
more
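A minimal sketch of empirical risk minimization for the simplest possible learning machine, a constant predictor under squared-error loss (illustrative only, not code from Chapter 1; for this loss the minimizer is known to be the sample mean, which makes the sketch easy to check):

```python
def empirical_risk(theta, data):
    """Average squared-error loss of the constant predictor theta."""
    return sum((y - theta) ** 2 for y in data) / len(data)

def erm_estimate(data, lr=0.1, iters=500):
    """Minimize the empirical risk by gradient descent; the learning
    machine is a dynamical system whose iterates descend the risk."""
    theta = 0.0
    n = len(data)
    for _ in range(iters):
        grad = sum(2.0 * (theta - y) for y in data) / n
        theta -= lr * grad
    return theta

theta = erm_estimate([1.0, 2.0, 3.0, 6.0])  # converges to the mean, 3.0
```

Swapping in a different loss or predictor changes the algorithm that falls out, which is the sense in which supervised, unsupervised, and reinforcement learning are all special cases of one risk-minimization framework.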
How to become a machine learning expert
2019/10/24
This particular podcast (Episode 78 of Learning Machines 101) is the initial episode in a new special series of episodes designed to provide commentary on a new book that I am in the process of writing. In this episode we discuss books, software, courses, and podcasts designed to help you become a machine learning expert! For more information, check out: www.learningmachines101.com
more
How to Choose the Best Model using BIC
2019/05/02
In this 77th episode of www.learningmachines101.com, we explain the proper semantic interpretation of the Bayesian Information Criterion (BIC) and emphasize how this semantic interpretation is fundamentally different from that of AIC (Akaike Information Criterion) model selection methods. Briefly, BIC is used to estimate the probability of the training data given the probability model, while AIC is used to estimate out-of-sample prediction error. The probability of the training data given the model is called the “marginal likelihood”. Using the marginal likelihood, one can calculate the probability of a model given the training data and then use this analysis to support selecting the most probable model, selecting a model that minimizes expected risk, and Bayesian model averaging. The assumptions required for BIC to be a valid approximation to the probability of the training data given the probability model are also discussed.
more
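The standard BIC formula behind the episode above can be sketched in a few lines (the log-likelihood values below are invented purely for illustration):

```python
import math

def bic(log_likelihood, num_params, num_observations):
    """Bayesian Information Criterion: k*ln(n) - 2*ln(L-hat).

    Lower is better; BIC approximates -2 times the log marginal
    likelihood of the training data given the model."""
    return num_params * math.log(num_observations) - 2.0 * log_likelihood

# A larger model must improve the fit by enough log-likelihood to
# pay for its extra complexity penalty:
simple = bic(log_likelihood=-120.0, num_params=2, num_observations=100)
complex_ = bic(log_likelihood=-118.5, num_params=3, num_observations=100)
best = "simple" if simple < complex_ else "complex"  # here: "simple"
```

Note the k*ln(n) penalty grows with the sample size n, which is one concrete way BIC differs from AIC.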
How to Choose the Best Model using AIC and GAIC
2019/01/23
In this episode, we explain the proper semantic interpretation of the Akaike Information Criterion (AIC) and the Generalized Akaike Information Criterion (GAIC) for the purpose of picking the best model for a given set of training data.  The precise semantic interpretation of these model selection criteria is provided, explicit assumptions are provided for the AIC and GAIC to be valid, and explicit formulas are provided for the AIC and GAIC so they can be used in practice. Briefly, AIC and GAIC provide a way of estimating the average prediction error of your learning machine on test data without using test data or cross-validation methods. The GAIC is also called the Takeuchi Information Criterion (TIC).
more
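Likewise, the standard AIC formula described above can be sketched as follows (the log-likelihood values are invented for illustration):

```python
def aic(log_likelihood, num_params):
    """Akaike Information Criterion: 2*k - 2*ln(L-hat).

    Estimates relative out-of-sample prediction error without a test
    set or cross-validation; lower values are preferred."""
    return 2.0 * num_params - 2.0 * log_likelihood

# Comparing two candidate models fit to the same training data:
aic_small = aic(log_likelihood=-118.0, num_params=2)  # 240.0
aic_large = aic(log_likelihood=-117.5, num_params=5)  # 245.0
```

Unlike BIC's k*ln(n) term, AIC's fixed 2*k penalty does not grow with the sample size, reflecting its different semantic target.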
Can computers think? A Mathematician's Response (remix)
2018/12/12
In this episode, we explore what computers can do as well as what they cannot do, using the Turing Machine argument. Specifically, we discuss the computational limits of computers and raise the question of whether such limits pertain to biological brains and other non-standard computing machines. This episode is dedicated to the memory of my mom, Sandy Golden. To learn more about Turing Machines, Super-Turing Machines, Hypercomputation, and my Mom, check out: www.learningmachines101.com
more

Podcast reviews

Read Learning Machines 101 podcast reviews


Greg Cathcart 2019/02/02
An important resource for Machine Learning
Richard Golden provides an upbeat and positive introduction to Machine Learning. Whether you are a beginner in the field or an advanced practitioner,...
more
jay birbeck 2019/01/12
I’ll be back.
Very freaky, and it’s hard to follow along, it makes me feel very futile. And it was also very strange how I found out about this.they called didn’t a...
more
Carlos Leonidas 2018/07/12
Great for those interested in machine learning.
This podcast is a great introduction to the field of Machine Learning from a statistics angle.
emily.learning 2018/07/01
The real deal
This guy is the real deal. Check out his google scholar page, 1000s of citations on machine learning books/articles dating back to the 80s. He is pa...
more
Self-improvement junkie 2018/06/21
Fun Presentation of Machine Learning
I’m considering having my kids listen to these episodes. The machine learning facts are presented in a really accessible way, I think.
JohnAbraham91 2018/06/06
Very useful content; Clear explanations
I have listened to a lot of Learning Machines 101 episodes. Each episode picks an interesting topic in the ML world and explains it in a way an intere...
more
OrvilleMB 2018/04/21
Learning Machines 101
I've just finished listening to the first podcast in this series and reviewing some of the material on the LM-101 website. If future podcasts and the...
more
Tallowa 2018/01/17
Best machine learning podcast!
I really enjoy this podcast. It is one of the best resources online to start learning about machine learning. Dr Golden really explains these conce...
more
atuljain70 2017/04/19
Excellent starter on machine learning
I just started in ML. Lucky to found this excellent podcast. Finished it in five days during my morning walks and commute. Have very good pointers for...
more
addy1746529 2017/02/09
Clear and logical
Dr Golden has very clear and logical explanations for this very interesting subject. I actually found this podcast a couple years ago, but it didn't s...
more
Check all reviews on Apple Podcasts
