Thread Easy

Your complete partner for Twitter threads

© 2025 Thread Easy All Rights Reserved.

Explore

Browse tweet threads, newest first.


RT @farguney: @svpino Coding with AI is now a force multiplier. Most developers won’t be replaced by AI alone, but by AI-augmented develope…

Founder @ https://t.co/AwROlKtFoF

Faruk Guney
Sun Nov 30 20:58:52
You’ll brute-force your way to tens of trillions of parameters that can only run in the cloud, not on the edge, and even that cloud future isn’t guaranteed in a world of data scarcity and tightening access. All that burn, just to ship something fragile, centralized, and brittle. What a spectacularly wasteful business model.
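The scale claim above can be made concrete with a back-of-envelope memory estimate. This is an illustrative sketch with assumed numbers (a hypothetical 10-trillion-parameter model), not figures from the thread:

```python
# Rough memory footprint of model weights by parameter count and precision.
# All numbers are illustrative assumptions, not claims from the thread.

def model_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bytes_per_param / 1e9

# Hypothetical 10-trillion-parameter model:
fp16 = model_memory_gb(10e12, 2)    # 16-bit weights
int4 = model_memory_gb(10e12, 0.5)  # aggressive 4-bit quantization

print(f"fp16 weights: {fp16:,.0f} GB")  # fp16 weights: 20,000 GB
print(f"int4 weights: {int4:,.0f} GB")  # int4 weights: 5,000 GB
```

Even under aggressive quantization, weight storage alone is orders of magnitude beyond any edge device's memory, which is the sense in which such models "can only run in the cloud."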

Founder @ https://t.co/AwROlKtFoF

Faruk Guney
Sat Nov 29 01:27:41
The blind corporate race to “Artificial General Intelligence” (AGI) without a coherent, testable theory of cognition and intelligence is fundamentally unserious. It’s like trying to build an interstellar starship using only today’s space technology: you can reliably get people to low Earth orbit, maybe the Moon with enough effort, but you’re not doing Proxima Centauri any time soon.

And yet, that “limited” space technology has transformed our world. The same engineering that can’t take us to another star has given us GPS, satellite communications, advanced materials, precision manufacturing, breakthroughs in medical devices, aerospace, automotive safety, and global education. We didn’t get value from fantasies of star travel; we got value from disciplined, grounded progress on what the technology could actually support.

AI is in exactly that phase. We have powerful pattern recognizers, not a scientific understanding of general intelligence. We don’t have a unified theory of cognition that can be cleanly mapped to current compute, architectures, and energy constraints. Pretending otherwise and burning trillions on a marketing narrative of “racing to AGI” is not visionary; it’s wasteful and, in many cases, reckless.

The rational path is clear:

Invest heavily in foundational research on cognition, intelligence, and learning.

Deepen our understanding of the current paradigm instead of overselling it beyond its limits.

Use today’s AI to build concrete, high-impact products that create real value for people, societies, and businesses.

We can pursue the equivalent of interstellar travel one day. But right now, the highest leverage is to fully exploit and truly understand the “low orbit” capabilities we already have and let genuine science, not corporate hype, determine when the next paradigm is actually ready.

Founder @ https://t.co/AwROlKtFoF

Faruk Guney
Sat Nov 29 01:10:21
LLMs cannot reliably replace jobs today, but they can dramatically enhance productivity. Powerful agents will be built on top of them, yet they will still require human oversight. True “machine autonomy” cannot be achieved with current LLM architectures alone. Another major paradigm shift will be necessary before meaningful job replacement becomes possible, and it’s likely already underway in some research labs.

Founder and research engineer @ https://t.co/AwROlKtFoF

Faruk Guney
Fri Nov 28 19:49:58
Humans train autonomous cars in simulation for hundreds of thousands of miles over weeks and months, then throw them onto real streets to gather even more data, then retrain them for weeks and months more. After a decade, we’re still nowhere near Level 5 autonomy.

A human learns to drive in days, not years. Practically competent in under a week. That’s because humans have extraordinary perception, adaptation, and online learning baked in long before they ever touch a steering wheel.

The truth is, we approached machine autonomy from the wrong starting point.
It’s not primarily a data problem. It’s not just a compute problem either.

The real bottleneck, it seems to me, is algorithmic understanding: how knowledge is represented, compressed, and processed. Until we rethink that foundation, no amount of data or GPUs will magically produce real autonomy.
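The data-efficiency gap described above can be sketched with rough arithmetic. The figures here (500,000 machine miles, ~20 hours of human practice at an average 30 mph) are illustrative assumptions, not measurements from the thread:

```python
# Back-of-envelope comparison of driving experience consumed by a machine
# versus a novice human. All inputs are illustrative assumptions.

def experience_ratio(machine_miles: float, human_hours: float,
                     avg_mph: float = 30.0) -> float:
    """How many times more miles the machine needed than the human drove."""
    human_miles = human_hours * avg_mph
    return machine_miles / human_miles

# Assumed: 500k simulated/logged miles vs ~20 hours of human practice.
ratio = experience_ratio(500_000, 20)
print(f"~{ratio:,.0f}x more driving experience than a novice human")  # ~833x
```

Even with generous assumptions for the human side, the ratio lands in the hundreds, which is the sense in which the problem looks algorithmic rather than data-limited.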

Founder @ https://t.co/AwROlKt7z7

Faruk Guney
Fri Nov 28 02:07:26
Either Google is still piping Gemini 1.5 through a thin Gemini 3 Thinking wrapper (even on an AI Ultra account), or people genuinely have no idea how much stronger ChatGPT 5.1 Thinking (Extended) is for science, engineering, coding, and research.

Industrial and Systems Engineer (bs, ms), @USC alumni, Former @JohnsHopkins (MS in AI). Past: Founder @ https://t.co/HInAbA9KEW. Now: Founder @ https://t.co/AwROlKt7z7

Faruk Guney
Wed Nov 26 02:44:32