Blog posts on relevant milestones, with links to the original references:

2010: Breakthrough of end-to-end deep learning on NVIDIA GPUs. Our simple but deep neural network (NN) on GPUs broke the MNIST benchmark. No incremental layer-by-layer training. No unsupervised pre-training. https://t.co/MfcBRTf2qm

2011: DanNet on NVIDIA GPUs triggers the deep CNN revolution. https://t.co/g0A05dlETs

2011: DanNet, the deep convolutional NN, wins a Chinese handwriting competition. https://t.co/cfc4rhtPon

2011: DanNet achieves the first superhuman visual pattern recognition. https://t.co/MHpWsQmaAd

March 2012: DanNet becomes the first NN to win an image segmentation competition. https://t.co/tUcK9v0Z3n

Sept 2012: DanNet becomes the first NN to win a medical imaging contest. https://t.co/sclXwEyT0Y

May 2015: Highway Networks, over 10x deeper than previous neural nets, based on LSTM's 1991 principle of residual connections. Open-gated variant: ResNet (published 7 months later). Deep learning is all about depth. LSTM: unlimited depth for recurrent nets. Highway Nets: unlimited depth for feedforward nets. https://t.co/Mr46rQnqPC

2017: history of computer vision contests won by deep CNNs on NVIDIA GPUs. https://t.co/VxZOIF4ALo

2022: ChatGPT uses principles of 1991 (when compute was 10 million times more expensive than today); the 1991 system is now called an unnormalised linear Transformer. Tweet: https://t.co/loW60fKCyU Overview: https://t.co/jYOUdmqZUM

2022: annotated history of modern AI and deep learning. https://t.co/Ys0dw5hkF4

Today's training sets are much bigger: in 2010 it was just MNIST; now it's the entire Internet!


physics of language models @ Meta (FAIR, not GenAI, not TBD) 🎓:Tsinghua Physics — MIT CSAIL — Princeton/IAS 🏅:IOI x 2 — ACM-ICPC — USACO — Codejam — math MCM

Building @EurekaLabsAI. Previously Director of AI @ Tesla, founding team @ OpenAI, CS231n/PhD @ Stanford. I like to train large deep neural nets.
