Thread Easy

Your all-in-one companion for Twitter threads

© 2025 Thread Easy All Rights Reserved.

Explore

Newest first — browse tweet threads


If you want to learn Prompt Engineering but don't know where to find good study materials, here are some quality resources I know of. Continuously updated; additions are welcome.

Prompt Engineering Guide https://t.co/9zEoOCH6MR This is an open-source learning site for Prompt Engineering that covers every aspect of the field step by step and systematically, with versions in multiple languages; I contributed translations for a few pages of the Chinese edition. The site is well suited to a quick systematic read-through for a big-picture view, and to coming back to browse now and then.

宝玉
Mon Sep 02 06:04:51
[1/9] As of yesterday, Lobe Chat Cloud (https://t.co/z4k5TITVKc) has been in public beta for one month and reached its first small milestone: $1000+ MRR, with total revenue of about ¥30,000 ($4000+). Let me share what we learned in practice this month, plus a few new reflections:

[2/9] First, the revenue numbers in detail. After 30-odd days, MRR has officially crossed the $1,000 mark, with total revenue of roughly $4k+, a bit over ¥30k. It feels like a decent start.

What really surprised me is that reaching $1000 MRR took only 58 subscribers. Looking back at that number, I suddenly felt the veil of grand narratives being pierced. In the traditional internet framing, only products at the tens-of-millions or even hundreds-of-millions scale count as successes; products with millions or hundreds of thousands of users barely register.

But for us, we may genuinely not need to build a product with millions of monthly actives. One to two thousand users willing to keep paying for a subscription would be enough to keep this team running steadily for the long term. Open-sourcing LobeChat no longer has to be a labor of love; it can truly become a sustained undertaking. We had heard plenty of open-source success stories, but the feeling is far stronger now that our own efforts in that direction have started to produce positive feedback.
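A quick back-of-the-envelope check of the figures in this thread (variable names here are just for illustration): at $1000 MRR from 58 subscribers, the implied average price is about $17/month, so the one to two thousand loyal subscribers mentioned would translate to roughly $17k to $34k MRR.

```python
mrr = 1000.0        # MRR at the milestone, in USD
subscribers = 58    # paying users needed to reach it

arpu = mrr / subscribers  # implied average revenue per subscriber
for users in (1_000, 2_000):
    print(f"{users} subscribers at ${arpu:.2f}/mo -> ${users * arpu:,.0f} MRR")
```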

空谷 Arvin Xu
Fri Aug 16 15:53:46
Myth of the Peace Movement 🧵

Pacifism as a stance, the rejection of the use of violence on principle, is a respectable ethical position.

Why the tradition of abusing this basic stance, through deception and manipulation,
is becoming a danger to our security:
1/25


In a WDR radio interview, both the host and her guest Sigmar Gabriel recalled taking part, as young people, in the great peace rally in Bonn's Hofgarten in 1983. But this formative experience of the 1980s peace movement can be exploited for manipulation. 2/25

Pedro
Mon Aug 05 14:44:44
🧵1/ I am somewhat dismayed (though sadly unsurprised) by this extremely disingenuous and callous reporting about Palestinian death figures. Claims that the UN has revised down its death toll are absolutely untrue, yet they have been repeated by right-wing outlets and even by the CFR 🤔

2/ These outlets are focusing on these two infographics by @ochaopt. The one on the left was published 6th May, the one on the right 8th May. As you can see, total fatalities INCREASE, as opposed to decrease. What the propaganda is seeking to do is barely even sophistry

Marc Owen Jones
Mon May 13 16:19:45
Happy Monday 🪶. We are sharing a former employee's account of the publisher @beetruvian's abuse of its workers, and of how it scams readers by selling books translated with Google Translate.

Some of you may know that a few months ago certain rumors emerged about the translations published by Beetruvian. Today I would like to recount my experience as a former employee of the publisher and confirm some of those rumors.

SEGAP🐦
Mon May 13 07:13:47
I *WAS* WRONG - $10K CLAIMED!

## The Claim

Two days ago, I confidently claimed that "GPTs will NEVER solve the A::B problem". I believed that:

1. GPTs can't truly learn new problems outside of their training set, and
2. GPTs can't perform long-term reasoning, no matter how simple it is.

I argued both of these are necessary to invent new science; after all, some math problems take years to solve. If you can't beat a 15-year-old at any given intellectual task, you're not going to prove the Riemann Hypothesis. To isolate these issues and make my point, I designed the A::B problem and posted it here - full definition in the quoted tweet.

## Reception, Clarification and Challenge

Shortly after posting it, some users provided a solution to a specific 7-token example I listed. I quickly pointed out that this wasn't what I meant; that example was merely illustrative, and answering one instance isn't the same as solving the problem (and can easily be cheated by prompt manipulation).

So, to make my statement clear, and to put my money where my mouth is, I offered a $10k prize to whoever could design a prompt that solved the A::B problem for *random* 12-token instances with a 90%+ success rate. That's still an easy task, taking an average of 6 swaps to solve; literally simpler than 3rd-grade arithmetic. Yet I firmly believed no GPT would be able to learn and solve it on-prompt, even for these small instances.
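The full definition lives in the quoted tweet, which isn't reproduced here; as an assumption to keep this page self-contained, the commonly circulated version of A::B uses four tokens (`A#`, `#A`, `B#`, `#B`): when two tokens' `#` ends face each other, they annihilate if the letters match and swap past each other if they differ, repeated until no rule applies. A minimal reference solver under that assumed rule set:

```python
# Assumed A::B rule set: facing '#' ends annihilate on equal letters, swap otherwise.
RULES = {
    ("A#", "#A"): [],            # A# #A -> (nothing)
    ("B#", "#B"): [],            # B# #B -> (nothing)
    ("A#", "#B"): ["#B", "A#"],  # A# #B -> #B A#
    ("B#", "#A"): ["#A", "B#"],  # B# #A -> #A B#
}

def normalize(tokens):
    """Rewrite the leftmost matching pair until the sequence is in normal form."""
    tokens = list(tokens)
    while True:
        for i in range(len(tokens) - 1):
            pair = (tokens[i], tokens[i + 1])
            if pair in RULES:
                tokens[i:i + 2] = RULES[pair]
                break
        else:              # no pair matched: normal form reached
            return tokens
```

For example, `normalize(["B#", "A#", "#B", "#A", "B#"])` reduces in three rewrites (one swap, two annihilations) to `["B#"]`.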

## Solutions and Winner

Hours later, many solutions were submitted. Initially, all failed, barely reaching 10% success rates. I was getting fairly confident, until, later that day, @ptrschmdtnlsn and @SardonicSydney submitted a solution that humbled me. Under their prompt, Claude-3 Opus was able to generalize from a few examples to arbitrary random instances, AND stick to the rules, carrying long computations with almost zero errors. On my run, it achieved a 56% success rate.

Through the day, users @dontoverfit (Opus), @hubertyuan_ (GPT-4), @JeremyKritz (Opus), @parth007_96 (Opus) and @ptrschmdtnlsn (Opus) reached similar success rates, and @reissbaker made a pretty successful GPT-3.5 fine-tune. But it was only late that night that @futuristfrog posted a tweet claiming to have achieved a near 100% success rate by prompting alone. And he was right. On my first run, it scored 47/50, granting him the prize, and completing the challenge.
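The pass/fail criterion described in the thread (random 12-token instances, 90%+ success over a run, e.g. 47/50) is straightforward to reproduce against a reference rewriter. A sketch of such a harness, again under the assumed publicly circulated A::B rules; `random_instance` and `score` are hypothetical names, not the author's actual evaluator (whose source, per the note below, was to be published separately):

```python
import random

# Assumed A::B rewrite rules (from the publicly circulated challenge).
RULES = {("A#", "#A"): [], ("B#", "#B"): [],
         ("A#", "#B"): ["#B", "A#"], ("B#", "#A"): ["#A", "B#"]}

def normalize(tokens):
    # Reference solver: rewrite the leftmost matching pair until none remains.
    tokens = list(tokens)
    while True:
        for i in range(len(tokens) - 1):
            pair = (tokens[i], tokens[i + 1])
            if pair in RULES:
                tokens[i:i + 2] = RULES[pair]
                break
        else:
            return tokens

def random_instance(n=12, seed=None):
    # A challenge instance: n tokens drawn uniformly at random.
    rng = random.Random(seed)
    return [rng.choice(["A#", "#A", "B#", "#B"]) for _ in range(n)]

def score(solver, trials=50, threshold=0.9):
    # Run the solver on fresh random instances; pass at >= 90% (45/50).
    correct = 0
    for _ in range(trials):
        inst = random_instance()
        correct += solver(inst) == normalize(inst)
    rate = correct / trials
    return rate, rate >= threshold
```

Plugging a model-backed solver into `score` in place of `normalize` would reproduce the challenge's grading loop.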

## How it works!?

The secret to his prompt is... going to remain a secret! That's because he kindly agreed to give 25% of the prize to the most efficient solution. This prompt costs $1+ per inference, so, if you think you can improve on that, you have until next Wednesday to submit your solution in the link below, and compete for the remaining $2.5k! Thanks, Bob.

## How do I stand?

Corrected! My initial claim was absolutely WRONG - for which I apologize. I doubted the GPT architecture would be able to solve certain problems which it, with no margin for doubt, solved. Does that prove GPTs will cure Cancer? No. But it does prove me wrong!

Note there is still a small problem with this: it isn't clear whether Opus is based on the original GPT architecture or not. All GPT-4 versions failed. If Opus turns out to be a new architecture... well, this whole thing would have, ironically, just proven my whole point 😅 But, for the sake of the competition, and in all fairness, Opus WAS listed as an option, so, the prize is warranted.

## Who am I and what am I trying to sell?

Wrong! I won't turn this into an ad. But, yes, if you're new here, I AM building some stuff, and, yes, just like today, I constantly validate my claims to make sure I can deliver on my promises. But that's all I'm gonna say, so, if you're curious, you'll have to find out for yourself (:

---

That's all. Thanks for all who participated, and, again - sorry for being a wrong guy on the internet today! See you.

Gist: https://t.co/qpSlUMXOTU


(The winning prompt will be published Wednesday, as well as the source code for the evaluator itself. Its hash is on the Gist.)

Taelin
Sun Apr 07 19:01:58