Thread Easy

The all-in-one partner for Twitter threads


Explore

Newest first — browse tweet threads


Claude Code is All You Need

When I first joined Anthropic I was surprised to learn that lots of the team used Claude Code as a general agent, not just for code.

I’ve since become a convert! I use Claude Code to help me with almost all the work I do now; here’s how:


Why? In Claude Code Everything is a File, and it knows how to use your computer like you do. Name your files well, and CC will be able to search them like you would. This lets you make custom setups for memory, todos, journals, screenshots and more.
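
As a rough illustration of that file-first workflow (my own sketch, not from the thread; the directory names and note format are assumptions, not anything Thariq or Anthropic prescribe), a plain-text workspace that the agent can search and edit like any other files might be scaffolded like this:

```python
# Hypothetical sketch: a plain-text "memory" workspace for a coding agent.
# Folder names, file naming scheme, and note format are all invented here.
from datetime import date
from pathlib import Path

WORKSPACE = Path.home() / "agent-notes"

def scaffold() -> None:
    """Create descriptively named folders the agent can search the way you would."""
    for folder in ("memory", "todos", "journal", "screenshots"):
        (WORKSPACE / folder).mkdir(parents=True, exist_ok=True)

def add_journal_entry(text: str) -> Path:
    """Append today's note to a dated Markdown file, one file per day."""
    path = WORKSPACE / "journal" / f"{date.today().isoformat()}.md"
    with path.open("a", encoding="utf-8") as f:
        f.write(f"- {text}\n")
    return path

if __name__ == "__main__":
    scaffold()
    print(add_journal_entry("Asked Claude Code to summarize yesterday's meeting notes."))
```

Because everything lives in ordinary, descriptively named files, the agent can find "yesterday's journal entry" with the same search tools it already uses on code.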

Thariq
Mon Jul 14 21:51:51
Congrats to @NVIDIA, the first public $4T company! Today, compute is 100000x cheaper, and $NVDA 4000x more valuable than in the 1990s when we worked on unleashing the true potential of neural networks. Thanks to Jensen Huang (see image) for generously funding our research 🚀


Blog posts on relevant milestones, with links to the original references:

- 2010: Breakthrough of end-to-end deep learning on NVIDIA GPUs. Our simple but deep neural network (NN) on GPUs broke the MNIST benchmark. No incremental layer-by-layer training. No unsupervised pre-training https://t.co/MfcBRTf2qm
- 2011: DanNet on NVIDIA GPUs triggers deep CNN revolution https://t.co/g0A05dlETs
- 2011: DanNet, the deep convolutional NN, wins Chinese handwriting competition https://t.co/cfc4rhtPon
- 2011: DanNet achieves first superhuman visual pattern recognition https://t.co/MHpWsQmaAd
- March 2012: DanNet becomes first NN to win an image segmentation competition https://t.co/tUcK9v0Z3n
- Sept 2012: DanNet becomes first NN to win a medical imaging contest https://t.co/sclXwEyT0Y
- May 2015: Highway Networks - over 10x deeper than previous neural nets, based on LSTM's 1991 principle of residual connections. Open-gated variant: ResNet (published 7 months later). Deep learning is all about depth. LSTM: unlimited depth for recurrent nets. Highway Nets: for feedforward nets https://t.co/Mr46rQnqPC
- 2017: history of computer vision contests won by deep CNNs on NVIDIA GPUs https://t.co/VxZOIF4ALo
- 2022: ChatGPT uses principles of 1991 (when compute was 10 million times more expensive than today) - the 1991 system is now called an unnormalised linear Transformer. Tweet: https://t.co/loW60fKCyU Overview: https://t.co/jYOUdmqZUM
- 2022: annotated history of modern AI and deep learning https://t.co/Ys0dw5hkF4

Today's training sets are much bigger: in 2010 it was just MNIST; now it's the entire Internet!

Jürgen Schmidhuber
Fri Jul 11 14:00:05
Harry and Meghan's right-hand man appears to extend an olive branch to two senior royal household staff including William's aide Jason Knauf who exposed Meghan Markle 'bullying' allegations


For the latest updates on breaking news visit our website https://t.co/cs4EQ2odpE #seriouslypopular

Daily Mail
Tue Jul 01 00:19:38
The race for LLM "cognitive core" - a few billion param model that maximally sacrifices encyclopedic knowledge for capability. It lives always-on and by default on every computer as the kernel of LLM personal computing.
Its features are slowly crystallizing:

- Natively multimodal text/vision/audio at both input and output.
- Matryoshka-style architecture allowing a dial of capability up and down at test time.
- Reasoning, also with a dial. (system 2)
- Aggressively tool-using.
- On-device finetuning LoRA slots for test-time training, personalization and customization.
- Delegates and double checks just the right parts with the oracles in the cloud if internet is available.

It doesn't know that William the Conqueror's reign ended on September 9, 1087, but it vaguely recognizes the name and can look up the date. It can't recite the SHA-256 of the empty string as e3b0c442..., but it can calculate it quickly should you really want it.
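
That SHA-256 point is easy to verify; here is a minimal Python snippet (my illustration, not part of the thread) showing that the hash of the empty string really does start with e3b0c442, and that computing it is trivial compared with memorizing it:

```python
# Verify the "calculate it, don't memorize it" example from the thread:
# SHA-256 of the empty string begins with e3b0c442...
import hashlib

digest = hashlib.sha256(b"").hexdigest()
print(digest)  # e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
assert digest.startswith("e3b0c442")
```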

What LLM personal computing lacks in broad world knowledge and top tier problem-solving capability it will make up in super low interaction latency (especially as multimodal matures), direct / private access to data and state, offline continuity, sovereignty ("not your weights not your brain"). i.e. many of the same reasons we like, use and buy personal computers instead of having thin clients access a cloud via remote desktop or so.
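
As a toy sketch of the "delegate just the right parts to the cloud oracles" idea (entirely my own illustration under assumed names; `local_model`, `cloud_oracle`, and the confidence threshold are invented placeholders, not Karpathy's design or any real API), the routing logic might look like:

```python
# Toy sketch: local-first answering with optional cloud delegation.
# Everything here is a stand-in; no real model or service is called.
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    confidence: float  # the model's own estimate in [0, 1]

def local_model(prompt: str) -> Answer:
    """Stand-in for the small on-device 'cognitive core'."""
    return Answer(text=f"(local guess for: {prompt})", confidence=0.4)

def cloud_oracle(prompt: str) -> Answer:
    """Stand-in for a large hosted model, used only when needed and available."""
    return Answer(text=f"(cloud answer for: {prompt})", confidence=0.95)

def answer(prompt: str, online: bool, threshold: float = 0.8) -> Answer:
    """Answer locally; escalate to the cloud only when unsure and the internet is up."""
    local = local_model(prompt)
    if local.confidence >= threshold or not online:
        return local
    return cloud_oracle(prompt)

print(answer("When did William the Conqueror's reign end?", online=True).text)
```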



Andrej Karpathy
Fri Jun 27 15:52:02

Do people *feel* how much work there is still to do. Like wow.

Andrej Karpathy
Fri Jun 27 15:52:02
Mikayla Raines, founder of "Save a Fox Rescue" just took her own life.

2 months ago she was seen on r/SaveAfoxSnark attempting to defend herself from her detractors.

Here's how the subreddit's creator responded when she showed up:

It's time to address "Snark" subreddits. 🧵


Back in April of 2024, reddit user u/Pale-Explanation-709 created r/SaveAfoxSnark after rumors of a bobcat biting a volunteer surfaced.

Reddit Lies
Tue Jun 24 15:05:50