Large Language Models on the Edge of the Scaling Laws

1:28:34

What’s happening with the latest releases of large language models? Is the industry hitting the edge of the scaling laws, and do the current benchmarks provide reliable performance assessments? This week on the show, Jodie Burchell returns to discuss the current state of LLM releases.

The most recent release of GPT-5 has been a wake-up call for the LLM industry. We discuss how the current scaling of these systems is reaching a point of diminishing returns. Jodie also shares how many of the current AI model assessments and benchmarks are flawed. We also take a sober look at the productivity gains companies are actually seeing from using these tools for software development.
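
For context, the scaling laws in question are usually expressed as a power law in model parameters and training tokens, in the spirit of the Kaplan et al. and Chinchilla results. The sketch below uses made-up, illustrative constants (E, A, B, alpha, and beta are placeholders, not fitted values from any paper) just to show why each doubling of scale buys a smaller improvement in loss:

```python
# Chinchilla-style scaling law sketch: loss falls as a power law in model
# parameters N and training tokens D. All constants are illustrative
# placeholders, not fitted values from any published paper.
E, A, B = 1.7, 400.0, 400.0   # irreducible loss and scale coefficients (made up)
alpha, beta = 0.34, 0.28      # power-law exponents (made up)

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss under the assumed power law."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Each doubling of parameters and data buys a smaller drop in loss:
prev = loss(1e9, 2e10)
for k in range(1, 6):
    cur = loss(1e9 * 2**k, 2e10 * 2**k)
    print(f"{2**k:>3}x scale: loss={cur:.3f}  improvement={prev - cur:.3f}")
    prev = cur
```

Even with toy numbers, the per-doubling improvement keeps shrinking, which is the flattening the conversation keeps circling back to.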

We discuss how newer developers should consider additional factors when looking at the current job market. Jodie digs into how economic changes and rising interest rates are influencing layoffs and hiring freezes. Then we share a wide collection of resources for you to continue exploring these topics.

This episode is sponsored by InfluxData.

Course Spotlight: Exploring Python Closures: Examples and Use Cases

Learn about Python closures: function-like objects with extended scope used for decorators, factories, and stateful functions.
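
If closures are new to you, here is a minimal, self-contained sketch of the factory pattern the course description mentions (the names make_counter and counter are just for illustration, not from the course itself): the inner function keeps access to variables in its enclosing scope, so it carries state between calls.

```python
def make_counter(start=0):
    """Factory returning a closure that keeps its own running count."""
    count = start  # captured by the inner function via its enclosing scope

    def counter():
        nonlocal count  # rebind the captured variable instead of creating a local
        count += 1
        return count

    return counter

clicks = make_counter()
print(clicks(), clicks(), clicks())  # 1 2 3 -- state persists between calls

other = make_counter(100)
print(other())                       # 101 -- each closure gets its own cell
```

The nonlocal statement is what lets the inner function rebind the captured variable rather than shadowing it with a new local.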

Topics:

  • 00:00:00 – Introduction
  • 00:03:00 – Recent conferences and talks
  • 00:04:18 – What’s going on with LLMs?
  • 00:06:06 – What happened with the GPT-5 release?
  • 00:08:14 – Simon Willison - 2025 in LLMs so far
  • 00:09:00 – How did we get here?
  • 00:10:37 – OpenAI and the scaling laws
  • 00:12:25 – Pivoting to post-training
  • 00:16:01 – Some history of AI eras
  • 00:17:54 – Issues with measuring performance and benchmarks
  • 00:22:19 – Chatbot Arena
  • 00:24:06 – Languages are finite
  • 00:26:22 – LLMs and the illusion of humanity
  • 00:30:41 – Sponsor: InfluxData
  • 00:31:34 – Types of solutions to move past these limits
  • 00:36:57 – Does AI actually boost developer productivity?
  • 00:44:19 – Agentic AI Programming with Python
  • 00:48:02 – Results of non-programmers vibe coding
  • 00:50:18 – Back to the concept of overfitting
  • 00:52:52 – The money involved in training
  • 00:56:50 – Video Course Spotlight
  • 00:58:21 – DeepSeek and new methods of training
  • 01:01:02 – Quantizing and fitting on a local machine
  • 01:04:48 – The layoffs and the economic changes
  • 01:10:32 – AI implementation failures
  • 01:21:01 – Don’t doubt yourself as a developer
  • 01:24:06 – What are you excited about in the world of Python?
  • 01:25:39 – What do you want to learn next?
  • 01:26:42 – What’s the best way to follow your work online?
  • 01:27:04 – Thanks and goodbye

Survey:

Show Links:

Level up your Python skills with our expert-led courses:

Support the podcast & join our community of Pythonistas
