
@frxiaobei's take:


II. Some facts: annual working time per employed person is higher in France (1,489h) than in Germany (1,335h). The average French employment contract provides for 38.2 hours per week, versus 36.6 hours for the Germans,


The free-rider phenomenon has been overblown. Far from being a serious problem, the free rider's conduct is economic, legal, and ethical: it violates no rights, imposes no costs, and consumes no economic goods in the strict sense.

Why? In Claude Code, Everything is a File, and it knows how to use your computer the way you do. Name your files well, and CC will be able to search them the way you would. This lets you build custom setups for memory, todos, journals, screenshots, and more.
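As a purely hypothetical illustration of that file-based approach: the directory layout and helper names below are invented for this sketch, not anything Claude Code prescribes. The point is just that well-named plain files are trivially greppable, by you or by a file-aware agent.

```python
# Hypothetical plain-files "memory" layout: dated journal files plus a
# single well-known todo file. All names here are illustrative choices.
from datetime import date
from pathlib import Path

BASE = Path.home() / "notes"  # invented root directory for the setup

def journal_entry(text: str) -> Path:
    """Append to a per-day journal file with a dated, searchable name."""
    path = BASE / "journal" / f"{date.today():%Y-%m-%d}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a") as f:
        f.write(f"- {text}\n")
    return path

def add_todo(text: str) -> None:
    """Keep one todo per line in a single predictable file."""
    path = BASE / "todos.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a") as f:
        f.write(f"- [ ] {text}\n")

if __name__ == "__main__":
    journal_entry("Sketched the file-based memory layout.")
    add_todo("Review yesterday's journal entry.")
```

Because every file name encodes its date and purpose, a plain text search (the kind of tool Claude Code already uses) is enough to retrieve "memory" from the layout.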


Blog posts on relevant milestones, with links to the original references:

- 2010: Breakthrough of end-to-end deep learning on NVIDIA GPUs. Our simple but deep neural network (NN) on GPUs broke the MNIST benchmark. No incremental layer-by-layer training. No unsupervised pre-training. https://t.co/MfcBRTf2qm
- 2011: DanNet on NVIDIA GPUs triggers the deep CNN revolution. https://t.co/g0A05dlETs
- 2011: DanNet, the deep convolutional NN, wins a Chinese handwriting competition. https://t.co/cfc4rhtPon
- 2011: DanNet achieves the first superhuman visual pattern recognition. https://t.co/MHpWsQmaAd
- March 2012: DanNet becomes the first NN to win an image segmentation competition. https://t.co/tUcK9v0Z3n
- Sept 2012: DanNet becomes the first NN to win a medical imaging contest. https://t.co/sclXwEyT0Y
- May 2015: Highway Networks, over 10x deeper than previous neural nets, based on the LSTM's 1991 principle of residual connections. Open-gated variant: ResNet (published 7 months later). Deep learning is all about depth. LSTM: unlimited depth for recurrent nets. Highway Nets: the same for feedforward nets (see the sketch below this list). https://t.co/Mr46rQnqPC
- 2017: history of computer vision contests won by deep CNNs on NVIDIA GPUs. https://t.co/VxZOIF4ALo
- 2022: ChatGPT uses principles of 1991 (when compute was 10 million times more expensive than today); the 1991 system is now called an unnormalised linear Transformer. Tweet: https://t.co/loW60fKCyU Overview: https://t.co/jYOUdmqZUM
- 2022: annotated history of modern AI and deep learning. https://t.co/Ys0dw5hkF4

Today's training sets are much bigger: in 2010 it was just MNIST; now it's the entire Internet!
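The Highway/ResNet item above turns on a single mechanism: the gated versus ungated skip connection. Here is a minimal numpy sketch of the two update rules; the toy dimension and random weights are placeholders for illustration, not anything from the linked posts.

```python
# Minimal sketch: residual connection vs. highway layer.
# Random weights stand in for a trained model; illustration only.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                    # toy feature dimension
x = rng.standard_normal(d)               # layer input

W_h = rng.standard_normal((d, d)) * 0.1  # transform weights
W_t = rng.standard_normal((d, d)) * 0.1  # gate weights (highway only)
b_t = -np.ones(d)                        # negative bias: gates start mostly closed

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

h = np.tanh(W_h @ x)                     # the layer's transform H(x)

# Residual connection (ResNet): the input is always carried through,
# i.e. both the transform and carry paths are held fully open.
y_resnet = x + h

# Highway layer: a learned gate t blends, per unit, the transform
# against the carried input. t -> 1 passes the transform; t -> 0
# passes x through unchanged, so depth costs nothing at initialization.
t = sigmoid(W_t @ x + b_t)
y_highway = t * h + (1.0 - t) * x

print("residual:", y_resnet[:3])
print("highway: ", y_highway[:3])
```

The negative gate bias reflects the Highway Networks trick of initializing gates toward the carry path, which is what lets very deep stacks train at all.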


