Alex Markham is completing their postdoc in the Math of Data and AI group at KTH Royal Institute of Technology in Sweden. Their research focuses on developing new algorithms for learning causal models from data. Causal inference is especially appealing to more applied researchers because it offers an intuitive framework for reasoning about why stuff happens and how we can influence it to happen differently. Alex also finds causal inference fascinating because of the many different fields it draws from, including philosophy, cognitive science, and methodology, as well as computational and mathematical fields like machine learning, statistics, graph theory, algebraic geometry, and combinatorics. Episode 73's got it all: math, science, and philosophy -- join us for a holistic half hour!
INTRO
Causal Inference
Correlation vs. Causality
THE BRAIN
Neuroimaging & fMRI
Statistics
Time
Variables
Complexity
Brain-Computer Interface (BCI)
Electroencephalography (EEG)
Prosthetics
The Matrix
CAUSALITY
Causal Relationships (Direct, Indirect, Mediated)
The Limits of Probability & Statistics
Extending the Language of Probability
The "Do" Operator
Symmetry of Correlation
"No Causation Without Manipulation"
Randomized Controlled Experimentation
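For listeners who want to see the do-operator in action, here is a minimal NumPy sketch (not from the episode; the model and numbers are made up for illustration) contrasting conditioning on X with intervening on X in a simple confounded model:

```python
# Assumed toy model: confounder Z -> X and Z -> Y, plus a direct effect X -> Y.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Observational world: Z confounds X and Y.
z = rng.binomial(1, 0.5, n)
x = rng.binomial(1, 0.2 + 0.6 * z)             # Z raises the chance of X
y = rng.binomial(1, 0.1 + 0.3 * x + 0.4 * z)   # both X and Z raise the chance of Y

# Conditioning: P(Y=1 | X=1) -- just filter the observed data.
p_cond = y[x == 1].mean()

# Intervening: P(Y=1 | do(X=1)) -- cut the Z -> X edge and set X = 1 for everyone.
x_do = np.ones(n, dtype=int)
y_do = rng.binomial(1, 0.1 + 0.3 * x_do + 0.4 * z)
p_do = y_do.mean()

print(f"P(Y=1 | X=1)     ~ {p_cond:.3f}")   # inflated by the confounder (about 0.72)
print(f"P(Y=1 | do(X=1)) ~ {p_do:.3f}")     # the causal effect of setting X (about 0.60)
```

Because Z confounds X and Y, the conditional probability overstates the effect of X; the interventional quantity is what a randomized controlled experiment would estimate.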
MATHEMATICS
Machine Learning
Dependence & Independence
(Acyclic) Directed Graphs (DAGs) & Colliders
Causal Models
Graph Spaces
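And here is a similarly minimal, made-up sketch of a collider: in the DAG X -> Z <- Y, X and Y are independent until you condition on their common effect Z:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

x = rng.normal(size=n)                  # X and Y are generated independently
y = rng.normal(size=n)
z = x + y + 0.1 * rng.normal(size=n)    # Z is a common effect (collider) of X and Y

print("corr(X, Y) overall:    ", round(np.corrcoef(x, y)[0, 1], 3))  # ~ 0

# Condition on the collider by selecting only samples where Z is large.
sel = z > 1.0
print("corr(X, Y) given Z > 1:", round(np.corrcoef(x[sel], y[sel])[0, 1], 3))  # clearly negative
```

This is why conditioning on a collider (for example, selecting on an outcome) can manufacture spurious dependence between otherwise unrelated variables.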
///
CONTACT
Alex's Website: causal.dev
My Website: rapyourgift.com
READINGS
Introduction to Causality in Machine Learning by Alexandre Gonfalonieri on Medium: https://towardsdatascience.com/introduction-to-causality-in-machine-learning-4cee9467f06f
/// CLOSING REMARKS
Does free will exist? Maybe. Regardless, please share your cherished feedback with me at abstractcast@gmail.com!
Liking the show? Drop us a juicy 5-star rating or a written review on Apple Podcasts!
Want to support the show? Save your $$$ and support us by Following & Subscribing on: Spotify, Facebook, Instagram & Twitter!