
Eiso Kant (CTO poolside) - Superhuman Coding Is Coming!

1:36:28
 
The content is provided by Machine Learning Street Talk (MLST). All podcast content, including episodes, artwork, and podcast descriptions, is uploaded and provided directly by Machine Learning Street Talk (MLST) or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined at https://hu.player.fm/legal.

Eiso Kant, CTO of poolside AI, discusses the company's approach to building frontier AI foundation models, with a particular focus on software development. The company's distinguishing strategy is reinforcement learning from code execution feedback, which Kant describes as an important axis for scaling AI capabilities beyond simply increasing model size or data volume. He predicts that human-level AI in knowledge work could be achieved within 18-36 months, and outlines poolside's vision of dramatically increasing software development productivity and accessibility.
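To make the core idea concrete, here is a minimal sketch (not poolside's actual system) of reinforcement learning from code execution feedback: the model proposes a program for a task, the program is run together with the task's unit tests in a subprocess, and the pass/fail result becomes the reward used to update the policy. The `policy` object, its `generate_candidate`/`update` methods, and the binary reward are illustrative assumptions.

```python
import subprocess
import sys
import tempfile


def run_tests(candidate_code: str, test_code: str, timeout: float = 10.0) -> float:
    """Execute a candidate program together with its unit tests in a subprocess.

    Returns 1.0 if every test passes (exit code 0), otherwise 0.0
    (the simplest possible execution-feedback reward signal).
    """
    program = candidate_code + "\n\n" + test_code
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path], capture_output=True, timeout=timeout
        )
        return 1.0 if result.returncode == 0 else 0.0
    except subprocess.TimeoutExpired:
        return 0.0


def rl_from_execution_feedback(policy, tasks, steps: int = 1000):
    """Toy outer loop: sample code from the policy, score it by running tests,
    and reinforce the sample according to its reward."""
    for _ in range(steps):
        for task in tasks:
            # `generate_candidate` and `update` are assumed interfaces on a
            # policy object (e.g. an LLM plus a policy-gradient optimizer).
            candidate = policy.generate_candidate(task["prompt"])
            reward = run_tests(candidate, task["tests"])
            policy.update(task["prompt"], candidate, reward)
```

A real pipeline would presumably use finer-grained rewards (per-test pass rates, runtime and resource limits) and stronger sandboxing, but the feedback loop of generate, execute, score, and update is the scaling axis described above.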

SPONSOR MESSAGES:

***

Tufa AI Labs is a brand-new research lab in Zurich started by Benjamin Crouzier, focused on o-series-style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich.

Go to https://tufalabs.ai/

***

Eiso Kant:

https://x.com/eisokant

https://poolside.ai/

TRANSCRIPT:

https://www.dropbox.com/scl/fi/szepl6taqziyqie9wgmk9/poolside.pdf?rlkey=iqar7dcwshyrpeoz0xa76k422&dl=0

TOC:

1. Foundation Models and AI Strategy

[00:00:00] 1.1 Foundation Models and Timeline Predictions for AI Development

[00:02:55] 1.2 Poolside AI's Corporate History and Strategic Vision

[00:06:48] 1.3 Foundation Models vs Enterprise Customization Trade-offs

2. Reinforcement Learning and Model Economics

[00:15:42] 2.1 Reinforcement Learning and Code Execution Feedback Approaches

[00:22:06] 2.2 Model Economics and Experimental Optimization

3. Enterprise AI Implementation

[00:25:20] 3.1 Poolside's Enterprise Deployment Strategy and Infrastructure

[00:26:00] 3.2 Enterprise-First Business Model and Market Focus

[00:27:05] 3.3 Foundation Models and AGI Development Approach

[00:29:24] 3.4 DeepSeek Case Study and Infrastructure Requirements

4. LLM Architecture and Performance

[00:30:15] 4.1 Distributed Training and Hardware Architecture Optimization

[00:33:01] 4.2 Model Scaling Strategies and Chinchilla Optimality Trade-offs

[00:36:04] 4.3 Emergent Reasoning and Model Architecture Comparisons

[00:43:26] 4.4 Balancing Creativity and Determinism in AI Models

[00:50:01] 4.5 AI-Assisted Software Development Evolution

5. AI Systems Engineering and Scalability

[00:58:31] 5.1 Enterprise AI Productivity and Implementation Challenges

[00:58:40] 5.2 Low-Code Solutions and Enterprise Hiring Trends

[01:01:25] 5.3 Distributed Systems and Engineering Complexity

[01:01:50] 5.4 GenAI Architecture and Scalability Patterns

[01:01:55] 5.5 Scaling Limitations and Architectural Patterns in AI Code Generation

6. AI Safety and Future Capabilities

[01:06:23] 6.1 Semantic Understanding and Language Model Reasoning Approaches

[01:12:42] 6.2 Model Interpretability and Safety Considerations in AI Systems

[01:16:27] 6.3 AI vs Human Capabilities in Software Development

[01:33:45] 6.4 Enterprise Deployment and Security Architecture

CORE REFS (see shownotes for URLs/more refs):

[00:15:45] Research demonstrating how training on model-generated content leads to distribution collapse in AI models, Ilia Shumailov et al. (Key finding on synthetic data risk)

[00:20:05] Foundational paper introducing Word2Vec for computing word vector representations, Tomas Mikolov et al. (Seminal NLP technique)

[00:22:15] OpenAI O3 model's breakthrough performance on ARC Prize Challenge, OpenAI (Significant AI reasoning benchmark achievement)

[00:22:40] Seminal paper proposing a formal definition of intelligence as skill-acquisition efficiency, François Chollet (Influential AI definition/philosophy)

[00:30:30] Technical documentation of DeepSeek's V3 model architecture and capabilities, DeepSeek AI (Details on a major new model)

[00:34:30] Foundational paper establishing compute-optimal scaling laws for LLM training (the "Chinchilla" paper), Jordan Hoffmann et al. (Key paper on LLM scaling; see the brief note after this list)

[00:45:45] Seminal essay arguing that scaling computation consistently trumps human-engineered solutions in AI, Richard S. Sutton (Influential "Bitter Lesson" perspective)
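For context on the "Chinchilla optimality trade-offs" item in the TOC, the Hoffmann et al. reference above fits pre-training loss as a function of parameter count N and training tokens D. A brief sketch of the paper's headline result, with the exponents quoted approximately:

```latex
% Parametric loss fit from Hoffmann et al. (2022), the "Chinchilla" paper.
% Exponents are approximate; see the paper for the fitted constants.
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad \alpha \approx 0.34, \quad \beta \approx 0.28
```

Minimizing this under a fixed compute budget (roughly C ≈ 6ND FLOPs) makes the optimal N and D grow in near-equal proportion with compute, i.e. on the order of 20 training tokens per parameter, which is the baseline that the episode's "Chinchilla optimality trade-offs" are measured against.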
