Thread Easy

Your all-purpose partner for Twitter threads


Explore

Newest first — browse tweet threads


My YouTube channel just passed 6,666 subscribers, so I'm sharing some scattered lessons from making videos.

As an independent creator, I do everything solo: topic selection, research, scripting, shooting, and editing.

AI + self-media + code is the leverage available to ordinary people.

If you want to add a few more variables to your life, this thread may give you some inspiration.

Everything here is personal, hands-on experience. I'm not aiming for exhaustive objectivity, only for what is genuinely useful to you.


1. As a creator, be clear about your niche in the ecosystem. Take YouTube as an example: there are viewers, content creators, the platform, and advertisers, and YouTube is what integrates this business model. Some people chase traffic at any cost, picking fights with everyone and everything; that damages the ecosystem, and I don't recommend it. Others show up just to skim a wave of traffic for money; that mindset is off. Remember that the platform is not a charity, it is a for-profit company.

海拉鲁编程客
Tue Jul 15 00:16:03
Claude Code is All You Need

When I first joined Anthropic I was surprised to learn that lots of the team used Claude Code as a general agent, not just for code.

I’ve since become a convert! I use Claude Code to help me with almost all the work I do now; here’s how:


Why? In Claude Code Everything is a File, and it knows how to use your computer like you do. Name your files well, and CC will be able to search them like you would. This lets you make custom setups for memory, todos, journals, screenshots and more.
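To make the pattern concrete, here is a minimal sketch of such a file-based memory setup; the notes/ layout and the journal_entry helper are hypothetical illustrations, not features of Claude Code itself:

```python
from datetime import date
from pathlib import Path

# Hypothetical layout for file-based agent "memory": well-named plain files
# that ordinary filesystem tools (ls, grep) can search, which is how an
# agent like Claude Code explores a machine.
#
#   notes/
#     journal/2025-07-15.md       # one file per day, named by ISO date
#     todos/inbox.md              # a single running todo list
#     memory/projects/thread.md   # one topic per file, named by subject

NOTES = Path.home() / "notes"

def journal_entry(text: str) -> Path:
    """Append a bullet to today's journal file, creating folders as needed."""
    day_file = NOTES / "journal" / f"{date.today().isoformat()}.md"
    day_file.parent.mkdir(parents=True, exist_ok=True)
    with day_file.open("a", encoding="utf-8") as f:
        f.write(f"- {text}\n")
    return day_file

journal_entry("Tried using Claude Code as a general agent, not just for code.")
```

Because each file name encodes its date or subject, a plain glob or grep recovers "memory" later with no special database, exactly the way you would search your own notes.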

Thariq
Mon Jul 14 21:51:51
Congrats to @NVIDIA, the first public $4T company! Today, compute is 100000x cheaper, and $NVDA 4000x more valuable than in the 1990s when we worked on unleashing the true potential of neural networks. Thanks to Jensen Huang (see image) for generously funding our research 🚀


Blog posts on relevant milestones, with links to the original references:

- 2010: Breakthrough of end-to-end deep learning on NVIDIA GPUs. Our simple but deep neural network (NN) on GPUs broke the MNIST benchmark. No incremental layer-by-layer training. No unsupervised pre-training. https://t.co/MfcBRTf2qm
- 2011: DanNet on NVIDIA GPUs triggers the deep CNN revolution. https://t.co/g0A05dlETs
- 2011: DanNet, the deep convolutional NN, wins a Chinese handwriting competition. https://t.co/cfc4rhtPon
- 2011: DanNet achieves the first superhuman visual pattern recognition. https://t.co/MHpWsQmaAd
- March 2012: DanNet becomes the first NN to win an image segmentation competition. https://t.co/tUcK9v0Z3n
- Sept 2012: DanNet becomes the first NN to win a medical imaging contest. https://t.co/sclXwEyT0Y
- May 2015: Highway Networks, over 10x deeper than previous neural nets, based on the LSTM's 1991 principle of residual connections. Open-gated variant: ResNet (published 7 months later). Deep learning is all about depth. LSTM: unlimited depth for recurrent nets. Highway Nets: for feedforward nets. https://t.co/Mr46rQnqPC
- 2017: History of computer vision contests won by deep CNNs on NVIDIA GPUs. https://t.co/VxZOIF4ALo
- 2022: ChatGPT uses principles of 1991 (when compute was 10 million times more expensive than today); the 1991 system is now called an unnormalised linear Transformer. Tweet: https://t.co/loW60fKCyU Overview: https://t.co/jYOUdmqZUM
- 2022: Annotated history of modern AI and deep learning. https://t.co/Ys0dw5hkF4

Today's training sets are much bigger: in 2010 it was just MNIST; now it's the entire Internet!

Jürgen Schmidhuber
Fri Jul 11 14:00:05
Harry and Meghan's right-hand man appears to extend an olive branch to two senior royal household staff including William's aide Jason Knauf who exposed Meghan Markle 'bullying' allegations


For the latest updates on breaking news visit our website https://t.co/cs4EQ2odpE #seriouslypopular

Daily Mail
Tue Jul 01 00:19:38
The race for LLM "cognitive core" - a few billion param model that maximally sacrifices encyclopedic knowledge for capability. It lives always-on and by default on every computer as the kernel of LLM personal computing.
Its features are slowly crystallizing:

- Natively multimodal text/vision/audio at both input and output.
- Matryoshka-style architecture allowing a dial of capability up and down at test time (a minimal sketch follows this list).
- Reasoning, also with a dial. (system 2)
- Aggressively tool-using.
- On-device finetuning LoRA slots for test-time training, personalization and customization.
- Delegates and double checks just the right parts with the oracles in the cloud if internet is available.
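To make that dial less abstract, here is a minimal sketch of a Matryoshka-style capability dial; the dimensions, toy weights, and embed function are illustrative assumptions, not a description of any shipped model. The core trick is training so that the leading k dimensions of a representation already form a usable, coarser representation, letting test-time cost scale with k:

```python
import numpy as np

# Toy Matryoshka-style embedding: assumes training has made every prefix of
# the full vector a usable lower-fidelity embedding (hypothetical weights).
FULL_DIM = 1024
rng = np.random.default_rng(0)
W = rng.standard_normal((32, FULL_DIM)) / np.sqrt(32)  # stand-in for learned weights

def embed(x: np.ndarray, dial: int) -> np.ndarray:
    """Return the first `dial` dimensions; a low dial is cheaper and coarser."""
    assert 1 <= dial <= FULL_DIM
    return np.tanh(x @ W)[..., :dial]

query = rng.standard_normal((1, 32))
cheap = embed(query, 64)    # quick on-device setting
rich = embed(query, 1024)   # full-capability setting
assert np.allclose(rich[..., :64], cheap)  # nested: cheap is a prefix of rich
```

In a real model the savings come from truncating the weights too (here, using W[:, :dial]), so a low dial genuinely costs less compute rather than just slicing the output.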

It doesn't know that William the Conqueror's reign ended on September 9, 1087, but it vaguely recognizes the name and can look up the date. It can't recite the SHA-256 of the empty string as e3b0c442..., but it can calculate it quickly should you really want it.
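That abbreviated hash checks out; here is a quick verification using only Python's standard library (the full digest in the assert is the well-known SHA-256 of the empty byte string):

```python
import hashlib

# SHA-256 of the empty string, abbreviated in the tweet as e3b0c442...
digest = hashlib.sha256(b"").hexdigest()
assert digest == "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
print(digest)
```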

What LLM personal computing lacks in broad world knowledge and top tier problem-solving capability it will make up in super low interaction latency (especially as multimodal matures), direct / private access to data and state, offline continuity, sovereignty ("not your weights not your brain"). i.e. many of the same reasons we like, use and buy personal computers instead of having thin clients access a cloud via remote desktop or so.



Andrej Karpathy
Fri Jun 27 15:52:02

Do people *feel* how much work there is still to do. Like wow.

Andrej Karpathy
Fri Jun 27 15:52:02