
1. As a creator, be clear about your ecological niche. Take YouTube as an example: there are viewers, content creators, the platform, and advertisers, and YouTube is the integrator of this business model. Some people chase traffic at any cost, picking fights with everyone and everything; this damages YouTube's ecosystem and is not recommended. Others come to the platform just to make money by skimming off a wave of traffic; that mindset is off. You have to recognize that the platform is not a charity, it is a for-profit company.

Why? In Claude Code, Everything is a File, and it knows how to use your computer like you do. Name your files well, and CC will be able to search them the way you would. This lets you build custom setups for memory, todos, journals, screenshots, and more.
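For instance, a predictable plain-file layout is all it takes. Here is a minimal sketch in Python, assuming a hypothetical ~/notes workspace; the names memory.md, todos/, journal/, and screenshots/ are illustrative conventions, not anything Claude Code prescribes:

```python
# Sketch: a well-named, plain-file workspace that a file-aware agent
# can search the way you would. All paths and file names below are
# hypothetical conventions chosen for this example.
from datetime import date
from pathlib import Path

ROOT = Path("~/notes").expanduser()  # hypothetical workspace root

def ensure_workspace() -> None:
    """Create a predictable layout: one concern per directory."""
    for sub in ("todos", "journal", "screenshots"):
        (ROOT / sub).mkdir(parents=True, exist_ok=True)
    memory = ROOT / "memory.md"
    if not memory.exists():
        memory.write_text("# Memory\n\nLong-lived facts go here.\n")

def add_todo(text: str) -> Path:
    """Append a todo to a dated file, e.g. todos/2025-01-30.md."""
    path = ROOT / "todos" / f"{date.today().isoformat()}.md"
    with path.open("a") as f:
        f.write(f"- [ ] {text}\n")
    return path

if __name__ == "__main__":
    ensure_workspace()
    print(add_todo("review pull request"))
```

Because every piece of state is an ordinary, descriptively named file, an agent can answer "what were yesterday's todos?" with the same glob-and-grep moves you would use yourself.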


Blog posts on relevant milestones, with links to the original references:

2010: Breakthrough of end-to-end deep learning on NVIDIA GPUs. Our simple but deep neural network (NN) on GPUs broke the MNIST benchmark: no incremental layer-by-layer training, no unsupervised pre-training. https://t.co/MfcBRTf2qm

2011: DanNet on NVIDIA GPUs triggers the deep CNN revolution. https://t.co/g0A05dlETs

2011: DanNet, the deep convolutional NN, wins a Chinese handwriting competition. https://t.co/cfc4rhtPon

2011: DanNet achieves the first superhuman visual pattern recognition. https://t.co/MHpWsQmaAd

March 2012: DanNet becomes the first NN to win an image segmentation competition. https://t.co/tUcK9v0Z3n

Sept 2012: DanNet becomes the first NN to win a medical imaging contest. https://t.co/sclXwEyT0Y

May 2015: Highway Networks - over 10x deeper than previous neural nets, based on the LSTM's 1991 principle of residual connections. The open-gated variant, ResNet, was published 7 months later. Deep learning is all about depth: LSTM gives unlimited depth to recurrent nets; Highway Nets bring it to feedforward nets. https://t.co/Mr46rQnqPC

2017: History of computer vision contests won by deep CNNs on NVIDIA GPUs. https://t.co/VxZOIF4ALo

2022: ChatGPT uses principles of 1991 (when compute was 10 million times more expensive than today); the 1991 system is now called an unnormalised linear Transformer. Tweet: https://t.co/loW60fKCyU Overview: https://t.co/jYOUdmqZUM

2022: Annotated history of modern AI and deep learning. https://t.co/Ys0dw5hkF4

Today's training sets are much bigger: in 2010 it was just MNIST; now it's the entire Internet!




Building @EurekaLabsAI. Previously Director of AI @ Tesla, founding team @ OpenAI, CS231n/PhD @ Stanford. I like to train large deep neural nets.


Do people *feel* how much work there is still to do? Like, wow.
