Incredibly, even Hinton's recent 2025 article [5] fails to cite Ivakhnenko & Lapa, the fathers of deep learning (1965) [1-3][6-10]. @geoffreyhinton claims [5] that his 1985 "Boltzmann machines" (BMs) [11] (actually 1975 Sherrington-Kirkpatrick models [6]) "are no longer used" but "were historically important" because "in the 1980s, they demonstrated that it was possible to learn appropriate weights for hidden neurons using only locally available information WITHOUT requiring a biologically implausible backward pass."
That's ridiculous. This had already been demonstrated 2 decades earlier, in the 1960s, in Ukraine [1-3]. Ivakhnenko's 1971 paper [3] described a deep learning network with 8 layers and layer-wise training. That depth is comparable to the depth of Hinton's BM-based 2006 "deep belief networks" with layer-wise training [4], published 35 years later without comparison to the original work [1-3], which was done when compute was millions of times more expensive.
And indeed, over half a century ago, Ivakhnenko's net learned appropriate weights for hidden neurons WITHOUT requiring a biologically implausible backward pass!
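To make that technical point concrete, here is a minimal sketch of GMDH-style layer-wise training in the spirit of [2,3]. It is a simplified, assumption-laden toy, not the original method: the quadratic two-input units, least-squares fitting, selection by validation error, and all names such as train_gmdh and keep are illustrative choices rather than details taken from the papers.

```python
# Minimal sketch of GMDH-style layer-wise training (a simplified reading of
# [2,3]; unit form, selection rule, and all names are illustrative assumptions).
import itertools
import numpy as np

def quad_features(a, b):
    # Each candidate unit is a quadratic polynomial of two inputs.
    return np.stack([np.ones_like(a), a, b, a * b, a * a, b * b], axis=1)

def fit_unit(a, b, y):
    # Weights come from a closed-form least-squares fit on the training split;
    # no backward pass through earlier layers is ever needed.
    w, *_ = np.linalg.lstsq(quad_features(a, b), y, rcond=None)
    return w

def train_gmdh(X_tr, y_tr, X_va, y_va, n_layers=8, keep=8):
    for _ in range(n_layers):  # depth comparable to the 8 layers of [3]
        cands = []
        for i, j in itertools.combinations(range(X_tr.shape[1]), 2):
            w = fit_unit(X_tr[:, i], X_tr[:, j], y_tr)
            pred = quad_features(X_va[:, i], X_va[:, j]) @ w
            cands.append((np.mean((pred - y_va) ** 2), i, j, w))
        cands.sort(key=lambda c: c[0])  # external criterion: validation error
        best = cands[:keep]             # surviving units feed the next layer
        X_tr = np.stack([quad_features(X_tr[:, i], X_tr[:, j]) @ w
                         for _, i, j, w in best], axis=1)
        X_va = np.stack([quad_features(X_va[:, i], X_va[:, j]) @ w
                         for _, i, j, w in best], axis=1)
    return X_va[:, 0]  # validation output of the best unit in the last layer

# Toy usage: learn a nonlinear target from 6 random input features.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]
out = train_gmdh(X[:200], y[:200], X[200:], y[200:])
print("validation MSE:", float(np.mean((out - y[200:]) ** 2)))
```

Each layer's weights are obtained in closed form from the outputs of the previous, already-trained layer, so nothing resembling a backward pass through earlier layers is required.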
Hinton & Sejnowski & co-workers have repeatedly plagiarized Ivakhnenko and others, and failed to rectify this in later surveys [6-8].
Crazy fact: today (Fri 5 Dec 2025), the inaugural so-called "Sejnowski-Hinton Prize" will be handed out at NeurIPS 2025 for a related paper on learning without exact backpropagation [12], which also did not mention the original work on deep learning without a backward pass [1-3].
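For context, the core idea of [12] fits in a few lines: the hidden layer's error signal is computed with a fixed random feedback matrix instead of the transpose of the learned forward weights, so no exact backward pass is needed. The sketch below is a toy under stated assumptions (a two-layer ReLU regression net; the sizes, learning rate, and random linear target are illustrative).

```python
# Minimal sketch of feedback alignment [12] (a toy two-layer ReLU regression
# net; sizes, learning rate, and the random linear target are assumptions).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 4, 16, 1, 0.05
W1 = rng.normal(0, 0.1, (n_hid, n_in))   # learned forward weights
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B = rng.normal(0, 0.1, (n_hid, n_out))   # FIXED random feedback weights

X = rng.normal(size=(256, n_in))
y = X @ rng.normal(size=(n_in, n_out))   # random linear teacher

for step in range(2000):
    h = np.maximum(0.0, X @ W1.T)        # forward pass, ReLU hidden layer
    e = h @ W2.T - y                     # output error
    # Key point of [12]: the hidden error signal uses the fixed random B,
    # not W2.T as exact backpropagation would require.
    dh = (e @ B.T) * (h > 0)
    W2 -= lr * e.T @ h / len(X)
    W1 -= lr * dh.T @ X / len(X)

print("final MSE:", float(np.mean(e ** 2)))
```

The paper reports that the forward weights come to align with the fixed feedback weights during training, which is why learning still works despite the random error pathway.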
What happened to peer review and scientific honesty?
REFERENCES
[1] Ivakhnenko, A. G. and Lapa, V. G. (1965). Cybernetic Predicting Devices. CCM Information Corporation. First working Deep Learners with many layers, learning internal representations.
[2] Ivakhnenko, A. G. (1968). The group method of data handling; a rival of the method of stochastic approximation. Soviet Automatic Control, 13:43-55.
[3] Ivakhnenko, A. G. (1971). Polynomial theory of complex systems. IEEE Transactions on Systems, Man and Cybernetics, (4):364-378.
[4] Hinton, G. E. and Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507.
[5] Hinton, G. (2025). Nobel Lecture: Boltzmann machines. Reviews of Modern Physics, 97:030502, 25 August 2025.
[6] Schmidhuber, J. (2024, updated 2025). A Nobel Prize for Plagiarism. Technical Report IDSIA-24-24.
[7] Schmidhuber, J. (2023). How 3 Turing awardees republished key methods and ideas whose creators they failed to credit. Technical Report IDSIA-23-23.
[8] Schmidhuber, J. (2025). Who invented deep learning? Technical Note IDSIA-16-25.
[9] Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61:85-117. Received the first Best Paper Award ever issued by the journal Neural Networks (founded in 1988).
[10] Schmidhuber, J. (2022). Annotated History of Modern AI and Deep Learning. Technical Report IDSIA-22-22, arXiv:2212.11279.
[11] Ackley, D., Hinton, G., and Sejnowski, T. (1985). A Learning Algorithm for Boltzmann Machines. Cognitive Science, 9(1):147-169.
[12] Lillicrap, T. P., Cownden, D., Tweed, D. B., and Akerman, C. J. (2016). Random synaptic feedback weights support error backpropagation for deep learning. Nature Communications, 7:13276.
Invented principles of meta-learning (1987), GANs (1990), Transformers (1991), very deep learning (1991), etc. Our AI is used many billions of times every day.
