Thread Easy

Your all-in-one Twitter thread assistant

Explore

Newest first; browse threads as cards

TBH, every time the AI fails, I mentally blame it on you.

Right now GPT-5.2 noticed that the parser was counting variables incorrectly, causing a linearity bug. The solution?

"Ignore the parser counter and implement a separate counter."

At this point, this isn't about being dumb. This is about making a bad decision that under no circumstances would be good. Either we remove the parser counter and use a separate function as the source of truth, or we keep it, and fix it. But such insane duct-taping has no place in a serious codebase, and that idea would never have occurred to an intelligence that evolved to learn coding from a pure blank slate. It must have been corrupted by evil forces that only humans can produce.

So I can't help but wonder...

Who did it learn that from?

I blame it on you

Kind / Bend / HVM / INets / λCalculus

Taelin
Tue Dec 23 17:20:19
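
A minimal sketch in Rust of the design point in the thread above (purely illustrative; Term, count_vars, and is_linear are made-up names, not HVM's or Bend's actual parser). It shows option A: derive the variable count and the linearity check directly from the parsed term, so the AST is the single source of truth. Option B would instead keep one parser-side counter and fix it in place; either way there is exactly one count, which is the argument against bolting a second, parallel counter onto a broken one.

    // Hypothetical sketch, not HVM's or Bend's actual parser: a tiny lambda
    // term type plus the two fixes the thread treats as acceptable.
    //
    // Option A: drop the parser-side counter entirely and recompute the
    //           count from the parsed term, so the AST is the single
    //           source of truth.
    // Option B: keep one counter and fix it where it miscounts.
    // The rejected duct tape would be a second counter maintained in
    // parallel with the broken one.

    #[derive(Debug)]
    enum Term {
        Var(String),
        Lam(String, Box<Term>),
        App(Box<Term>, Box<Term>),
    }

    // Option A: count variable occurrences on demand, straight from the AST.
    fn count_vars(term: &Term) -> usize {
        match term {
            Term::Var(_) => 1,
            Term::Lam(_, body) => count_vars(body),
            Term::App(fun, arg) => count_vars(fun) + count_vars(arg),
        }
    }

    // A linearity check that trusts only the AST: every bound variable
    // must occur exactly once in its body.
    fn is_linear(term: &Term) -> bool {
        fn uses(term: &Term, name: &str) -> usize {
            match term {
                Term::Var(v) => (v == name) as usize,
                // An inner lambda binding the same name shadows the outer one.
                Term::Lam(n, body) => if n == name { 0 } else { uses(body, name) },
                Term::App(fun, arg) => uses(fun, name) + uses(arg, name),
            }
        }
        match term {
            Term::Var(_) => true,
            Term::Lam(name, body) => uses(body, name) == 1 && is_linear(body),
            Term::App(fun, arg) => is_linear(fun) && is_linear(arg),
        }
    }

    fn main() {
        // λx. (x x) is not linear: x is used twice.
        let dup = Term::Lam(
            "x".into(),
            Box::new(Term::App(
                Box::new(Term::Var("x".into())),
                Box::new(Term::Var("x".into())),
            )),
        );
        println!("vars = {}, linear = {}", count_vars(&dup), is_linear(&dup));
    }
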
Another day of AI being a net loss. I'm a bit sad today. Usually I'd just sit down and code, but AI exists, and that crossed my motivation threshold. Instead, I just kept asking it to do a task that I know is too hard for it. Expectedly, it failed, and I did nothing useful today

just to be clear, this is obv my own fault

Taelin
Mon Dec 22 22:30:02
I don't understand why the (Anthropic | OpenAI) + (Cerebras | Groq | SambaNova) partnership isn't a thing yet. Whatever is going on under the hood, everyone is losing here.

Imagine Opus 4.5 running at 1000 tokens/s?

Kind / Bend / HVM / INets / λCalculus

Taelin
Sun Dec 21 14:20:26
apparently it also managed, in another tab, to fully port parallel-mode from the old HVM to the new one?

that is a COLOSSAL task. I didn't even bother asking an AI before, because it shouldn't have worked. in the back of my mind I knew it would take days. but it did it in... an hour?

seems like this model just keeps working and never loses track

crazy times

Kind / Bend / HVM / INets / λCalculus

Taelin
Sat Dec 20 03:18:47