MLIR: Crossing the CUDA Moat
Market Dominance By the end of 2025, news outlets reported that Nvidia held 92% of the GPU market, a significant climb from 2014, when its share sat at 65%. This dominance wasn't built on hardware alone; it was secured by CUDA and a relentless pace of hardware iteration. CUDA lets Nvidia roll out new architectures while maintaining seamless backward compatibility across generations. Since 2023, Nvidia has widened its lead by adding specialized support for dive
Jan 28


Why liquid neural networks are interesting and you should pay attention.
Liquid neural networks (LNNs) were created by MIT researchers Ramin Hasani and Daniela Rus. In an era where OpenAI is creating models with 1 trillion parameters, LNNs show potential to drastically reduce parameter count: instead of trillions of parameters, models might be able to deliver the same accuracy and performance with millions. One significant difference between transformers and LNNs is that the weights aren't static. Classic Transformer models like chatGP
Jan 16
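The excerpt's key claim is that an LNN's effective weights aren't static: the neuron's dynamics change with the input. As a rough illustration only (a toy liquid time-constant style update with made-up parameters, not the actual MIT implementation), the idea can be sketched like this:

```python
import math

def liquid_step(x, u, dt=0.05, w=1.0, b=0.0, tau=1.0, A=1.0):
    """One Euler step of a toy liquid time-constant (LTC) style neuron.

    The nonlinearity f gates both the decay rate and the drive, so the
    effective time constant of the state x depends on the input u --
    that input-dependence is the "liquid" part. All parameter values
    here are illustrative placeholders.
    """
    f = math.tanh(w * u + b)              # input-dependent gate
    dx = -(1.0 / tau + f) * x + f * A     # decay rate varies with input
    return x + dt * dx

# Drive the neuron with a constant input; the state settles to a
# fixed point determined by that input rather than a fixed weight.
x = 0.0
for _ in range(200):
    x = liquid_step(x, u=1.0)
```

Changing `u` changes not just the output but how fast the state reacts, which a static weight matrix cannot do.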


What Is Gradient Descent? A Practical Explanation
Peter Lin Gradient descent is one of those machine learning terms that gets mentioned everywhere, yet it often feels abstract or intimidating to people who aren't deep in the field. In this article, we break it down in practical terms: what gradient descent is, why it matters, and how it actually works inside modern machine learning models. This explanation is based on a conversation from the Practical AI podcast with hosts Jeff and Peter. Why Gradient Descent Exists At its core
Dec 18, 2025
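The mechanics the article promises to unpack fit in a few lines. A minimal sketch (a hand-rolled loop on a one-parameter quadratic, not code from the article): compute the gradient, step in the opposite direction, repeat.

```python
# Minimal gradient descent: minimize f(w) = (w - 3)^2.
# The derivative f'(w) = 2 * (w - 3) points uphill, so we step the other way.
def gradient_descent(lr=0.1, steps=100):
    w = 0.0                  # arbitrary starting guess
    for _ in range(steps):
        grad = 2 * (w - 3)   # gradient of the loss at the current w
        w -= lr * grad       # step opposite the gradient
    return w

print(gradient_descent())  # converges toward the minimum at w = 3
```

Real models do the same thing, just with millions of parameters and gradients computed by backpropagation instead of by hand.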


Why "Brute Force" Sucks for Training Models
Jeff Northrup There is a saying when learning to read music: "The long way is the short way." This implies that there is no short way, that the ONLY way to learn hard things is "brute force." But what if the long way sucks? What if there is a short way to learn something equally well? In machine learning, the brute force method involves ever-increasing amounts of data, time, money, and energy. The scenario sometimes looks like this: You are training the 5th model to acco
Nov 13, 2025
