
Content is provided by Machine Learning Street Talk (MLST). All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Machine Learning Street Talk (MLST) or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://hu.player.fm/legal.

Jürgen Schmidhuber - Neural and Non-Neural AI, Reasoning, Transformers, and LSTMs

1:39:39
 
 

Manage episode 436675796 series 2803422

Jürgen Schmidhuber, the father of generative AI, shares his groundbreaking work in deep learning and artificial intelligence. In this exclusive interview, he discusses the history of AI, some of his contributions to the field, and his vision for the future of intelligent machines. Schmidhuber offers unique insights into the exponential growth of technology and the potential impact of AI on humanity and the universe.

YT version: https://youtu.be/DP454c1K_vQ

MLST is sponsored by Brave:

The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval-augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api.

TOC

00:00:00 Intro

00:03:38 Reasoning

00:13:09 Potential AI Breakthroughs Reducing Computation Needs

00:20:39 Memorization vs. Generalization in AI

00:25:19 Approach to the ARC Challenge

00:29:10 Perceptions of ChatGPT and AGI

00:58:45 Abstract Principles of Jürgen's Approach

01:04:17 Analogical Reasoning and Compression

01:05:48 Breakthroughs in 1991: the P, the G, and the T in ChatGPT and Generative AI

01:15:50 Use of LSTM in Language Models by Tech Giants

01:21:08 Neural Network Aspect Ratio Theory

01:26:53 Reinforcement Learning Without Explicit Teachers

Refs:

★ "Annotated History of Modern AI and Deep Learning" (2022 survey by Schmidhuber):

★ Chain Rule For Backward Credit Assignment (Leibniz, 1676)

★ First Neural Net / Linear Regression / Shallow Learning (Gauss & Legendre, circa 1800)

★ First 20th Century Pioneer of Practical AI (Quevedo, 1914)

★ First Recurrent NN (RNN) Architecture (Lenz, Ising, 1920-1925)

★ AI Theory: Fundamental Limitations of Computation and Computation-Based AI (Gödel, 1931-34)

★ Unpublished ideas about evolving RNNs (Turing, 1948)

★ Multilayer Feedforward NN Without Deep Learning (Rosenblatt, 1958)

★ First Published Learning RNNs (Amari and others, ~1972)

★ First Deep Learning (Ivakhnenko & Lapa, 1965)

★ Deep Learning by Stochastic Gradient Descent (Amari, 1967-68)

★ ReLUs (Fukushima, 1969)

★ Backpropagation (Linnainmaa, 1970); precursor (Kelley, 1960)

★ Backpropagation for NNs (Werbos, 1982)

★ First Deep Convolutional NN (Fukushima, 1979); later combined with Backprop (Waibel 1987, Zhang 1988).

★ Metalearning or Learning to Learn (Schmidhuber, 1987)

★ Generative Adversarial Networks / Artificial Curiosity / NN Online Planners (Schmidhuber, Feb 1990; see the G in Generative AI and ChatGPT)

★ NNs Learn to Generate Subgoals and Work on Command (Schmidhuber, April 1990)

★ NNs Learn to Program NNs: Unnormalized Linear Transformer (Schmidhuber, March 1991; see the T in ChatGPT)

★ Deep Learning by Self-Supervised Pre-Training. Distilling NNs (Schmidhuber, April 1991; see the P in ChatGPT)

★ Experiments with Pre-Training; Analysis of Vanishing/Exploding Gradients, Roots of Long Short-Term Memory / Highway Nets / ResNets (Hochreiter, June 1991, further developed 1999-2015 with other students of Schmidhuber)

★ LSTM journal paper (1997, most cited AI paper of the 20th century)

★ xLSTM (Hochreiter, 2024)

★ Reinforcement Learning Prompt Engineer for Abstract Reasoning and Planning (Schmidhuber 2015)

★ Mindstorms in Natural Language-Based Societies of Mind (2023 paper by Schmidhuber's team)

https://arxiv.org/abs/2305.17066

★ Bremermann's physical limit of computation (1982)

EXTERNAL LINKS

CogX 2018 - Professor Juergen Schmidhuber

https://www.youtube.com/watch?v=17shdT9-wuA

Discovering Neural Nets with Low Kolmogorov Complexity and High Generalization Capability (Neural Networks, 1997)

https://sferics.idsia.ch/pub/juergen/loconet.pdf

The paradox at the heart of mathematics: Gödel's Incompleteness Theorem - Marcus du Sautoy

https://www.youtube.com/watch?v=I4pQbo5MQOs

(Refs truncated; full list in the YouTube video description)


185 episodes

