Thread Easy

Your all-purpose partner for Twitter threads

Explore

Newest first — browse tweet threads

My favorite version of a monk in OSR games is like, replicating the efficiencies of items without items. 

Climb like you had a rope.
Hit like you had a dagger/a sword/a spear.
Deflect like you had a shield.
Endure hunger like you had food.

Award-winning TTRPG designer/idiot His Majesty the Worm out now! https://t.co/uM0lozLnxG

Joshy McCroo - His Majesty the Worm Out Now!
Tue May 14 12:00:03
🧵1/ I am somewhat dismayed (though sadly unsurprised) by this extremely disingenuous and callous reporting about Palestinian death figures. Claims that the UN has revised down its death toll are absolutely untrue, but have been repeated not only by right-wing outlets but also by the CFR 🤔

2/ These media outlets are focusing on these two infographics by @ochaopt. The one on the left was published 6th May, the one on the right on 8th May. As you can see, total fatalities INCREASE, as opposed to decrease. What the propaganda is seeking to do is barely sophistry.

Marc Owen Jones
Mon May 13 16:19:45
Happy Monday 🪶. We're sharing a former employee's account of the abuses the publisher @beetruvian inflicts on its workers, and of how it scams readers by selling books translated with Google Translate.

Some of you may know that a few months ago certain rumors emerged about the translations published by Beetruvian. Today I'd like to share my experience as a former employee of the publisher and confirm some of those rumors.

SEGAP🐦
Mon May 13 07:13:47
game design type thread: while obviously what mechanics are best for your game will vary with what you want, i think i've kinda ended up coming up with a solid mechanic for doing body armour that i keep reusing because it just works so damn well

lemme explain

basically, i treat armour as two numbers: a "Thickness", which is a value, and a "Coverage", which is a dice target. So X/Y+. Thickness is how thick or resilient the material is. Coverage is how much of the body it actually protects.
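To make the X/Y+ notation concrete, here is a minimal sketch of one way the two numbers could be rolled. The tweet doesn't specify the die size or how Thickness is applied, so the d6 coverage roll, the flat damage reduction, and the example stat lines below are all assumptions for illustration.

```python
import random

# Hypothetical sketch of the Thickness/Coverage armour idea (not the author's
# actual rules text). Armour is written "X / Y+", e.g. 3/2+:
#   thickness: flat damage soaked when the armour covers the hit (assumption)
#   coverage:  d6 target; on coverage or higher, the blow lands on armour

def apply_armour(damage: int, thickness: int, coverage: int) -> int:
    """Return damage taken after the armour check (assumes a d6 coverage roll)."""
    roll = random.randint(1, 6)
    if roll >= coverage:               # armour covers the hit location
        return max(0, damage - thickness)
    return damage                      # hit slips past the armour entirely

# Made-up examples: heavy plate at 3/2+ soaks most hits,
# while a light gambeson at 1/5+ only occasionally helps.
print(apply_armour(damage=5, thickness=3, coverage=2))
print(apply_armour(damage=5, thickness=1, coverage=5))
```

Under this reading, Thickness scales with the material and Coverage with how much of the body the piece actually protects, which matches the two axes described in the tweet.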

Erika Chappell 🏳️‍⚧️ Professional Simpsonsologist
Wed May 01 20:42:33
"The Sex FORCE is the GOD FORCE!" 🧵

Every time I post on X (Twitter) without fail, I get hit with an OnlyFans Bot Account in my replies. Every SINGLE TIME! Insta-BLOCKED, but it continues, regardless.

"PUSSY IN THE BIO!"

"TITS IN THE BIO!"

I've NEVER seen it this bad.

We have officially entered the age of the Digital Wasteland; X has become the Cyber Ghettos. Grifters, OnlyFans Scams, it's literally NEVERENDING. As I have emphasized, YOU are controlled by your Sexual Desires. The Sex FORCE is the GOD FORCE. Let's go down the 🐇 🕳.

Hidden AmuraKa
Sun Apr 21 00:07:37
I *WAS* WRONG - $10K CLAIMED!

## The Claim

Two days ago, I confidently claimed that "GPTs will NEVER solve the A::B problem". I believed that:

1. GPTs can't truly learn new problems outside of their training set, and
2. GPTs can't perform long-term reasoning, no matter how simple it is.

I argued both of these are necessary to invent new science; after all, some math problems take years to solve. If you can't beat a 15-year-old at any given intellectual task, you're not going to prove the Riemann Hypothesis. To isolate these issues and make my point, I designed the A::B problem and posted it here - full definition in the quoted tweet.

## Reception, Clarification and Challenge

Shortly after posting it, some users provided a solution to a specific 7-token example I listed. I quickly pointed out that this wasn't what I meant; that the example was merely illustrative, and that answering one instance isn't the same as solving the problem (and can easily be cheated by prompt manipulation).

So, to make my statement clear, and to put my money where my mouth is, I offered a $10k prize to whoever could design a prompt that solved the A::B problem for *random* 12-token instances with a 90%+ success rate. That's still an easy task, one that takes an average of 6 swaps to solve; literally simpler than 3rd-grade arithmetic. Yet I firmly believed no GPT would be able to learn and solve it on-prompt, even for these small instances.
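For readers without the quoted tweet handy: A::B is a token-rewriting puzzle, and the sketch below shows it as the challenge is commonly stated (four tokens, adjacent-pair rules applied until nothing matches). The rule table here is recalled from the public challenge rather than from this page, so treat it as an assumption; the authoritative definition remains the quoted tweet.

```python
import random

# Sketch of the A::B rewriting puzzle as commonly stated (assumed rule set).
# Tokens: A#, #A, B#, #B. Adjacent pairs rewrite as:
#   A# #A -> (nothing)      B# #B -> (nothing)
#   A# #B -> #B A#          B# #A -> #A B#
RULES = {
    ("A#", "#A"): [],
    ("B#", "#B"): [],
    ("A#", "#B"): ["#B", "A#"],
    ("B#", "#A"): ["#A", "B#"],
}

def normalize(tokens):
    """Apply the rewrite rules until no adjacent pair matches (the answer)."""
    tokens = list(tokens)
    changed = True
    while changed:
        changed = False
        for i in range(len(tokens) - 1):
            pair = (tokens[i], tokens[i + 1])
            if pair in RULES:
                tokens[i:i + 2] = RULES[pair]   # splice in 0 or 2 tokens
                changed = True
                break
    return tokens

# A random 12-token instance, like the ones used to score submissions.
instance = [random.choice(["A#", "#A", "B#", "#B"]) for _ in range(12)]
print(instance, "->", normalize(instance))
```

The challenge, of course, asked the model to reach the normal form by in-context reasoning alone, not by running code like this.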

## Solutions and Winner

Hours later, many solutions were submitted. Initially, all failed, barely reaching 10% success rates. I was getting fairly confident, until, later that day, @ptrschmdtnlsn and @SardonicSydney submitted a solution that humbled me. Under their prompt, Claude-3 Opus was able to generalize from a few examples to arbitrary random instances, AND stick to the rules, carrying long computations with almost zero errors. On my run, it achieved a 56% success rate.

Through the day, users @dontoverfit (Opus), @hubertyuan_ (GPT-4), @JeremyKritz (Opus), @parth007_96 (Opus) and @ptrschmdtnlsn (Opus) reached similar success rates, and @reissbaker made a pretty successful GPT-3.5 fine-tune. But it was only late that night that @futuristfrog posted a tweet claiming to have achieved a near-100% success rate by prompting alone. And he was right. On my first run, it scored 47/50 (94%, clearing the 90% bar), granting him the prize and completing the challenge.

## How does it work?!

The secret to his prompt is... going to remain a secret! That's because he kindly agreed to give 25% of the prize to the most efficient solution. This prompt costs $1+ per inference, so, if you think you can improve on that, you have until next Wednesday to submit your solution in the link below and compete for the remaining $2.5k (that 25% of the $10k)! Thanks, Bob.

## Where do I stand?

Corrected! My initial claim was absolutely WRONG - for which I apologize. I doubted the GPT architecture would be able to solve certain problems which it, beyond any doubt, did solve. Does that prove GPTs will cure cancer? No. But it does prove me wrong!

Note there is still a small problem with this: it isn't clear whether Opus is based on the original GPT architecture or not. All GPT-4 versions failed. If Opus turns out to be a new architecture... well, this whole thing would have, ironically, just proven my whole point 😅 But, for the sake of the competition, and in all fairness, Opus WAS listed as an option, so, the prize is warranted.

## Who am I, and what am I trying to sell?

Wrong! I won't turn this into an ad. But, yes, if you're new here, I AM building some stuff, and, yes, just like today, I constantly validate my claims to make sure I can deliver on my promises. But that's all I'm gonna say, so, if you're curious, you'll have to find out for yourself (:

####

That's all. Thanks to all who participated, and, again - sorry for being a wrong guy on the internet today! See you.

Gist: https://t.co/qpSlUMXOTU

(The winning prompt will be published Wednesday, as well as the source code for the evaluator itself. Its hash is on the Gist.)

Taelin
Sun Apr 07 19:01:58