Explore

Newest first — browse tweet threads

I always try to ground this with founders. Someone gave you *a million dollars* to try your high risk, whacky idea you thought of. No great risk to you other than time which you’d spend on it potentially anyway. What a time to be alive.

Founder: @mixpanel Pizzatarian, programmer, music maker

Suhail
Sun Nov 30 16:05:45
> Location: Wuhan, China
See? This is Soft Power, doggy. Not just dazzling westoids.
smelt your iron, install your 5G in Africa. People working on all that will goon to weeb content and feel spiritual allegiance to Glorious Nippon. You could go with Celestial Kingdom but nooo…

We're in a race. It's not USA vs China but humans and AGIs vs ape power centralization. @deepseek_ai stan #1, 2023–Deep Time «C’est la guerre.» ®1

Teortaxes▶️ (DeepSeek 推特🐋铁粉 2023 – ∞)
Sun Nov 30 16:02:55
Build a product.
Build a business.
Build an audience.

I highly recommend buying the bundle before it jumps from $25 back to $100+

Building https://t.co/od97B0HVrk and https://t.co/666FnyVVE0 in Public. Raising all the boats with kindness. 🎙️ https://t.co/6w69DZmi8H · ✍️ https://t.co/lpnor5rsTW

Arvid Kahl
Sun Nov 30 16:01:35
Quick reminder: two days left (until Dec 1) for Black Friday 40% off LaravelDaily or FilamentExamples.

- 40% off Yearly/Lifetime on https://t.co/N72SAFWfTX (no coupon needed)

- 40% off any plan on https://t.co/ESISILtjGB (coupon FRIDAY25)

~20 yrs in web-dev, now mostly Laravel. My Laravel courses: https://t.co/HRUAJdMRZL My Youtube channel: https://t.co/qPQAkaov2F

Povilas Korop | Laravel Courses Creator & Youtuber
Sun Nov 30 16:00:30
Continued with this one: hypnotic tale, still drawing from Kafka, but surprisingly foreshadowing Philip K. Dick (the snow is a hallucinogenic mushroom). Also a political tale, which is less surprising given the first edition: Germany, 1932.

Artisanal baker of reasoning models @pleiasfr

Alexander Doria
Sun Nov 30 16:00:27
Modern AI is based on artificial neural nets (NNs). Who invented them? https://t.co/ZCI8ZrEKnZ

Biological neural nets were discovered in the 1880s [CAJ88-06]. The term "neuron" was coined in 1891 [CAJ06]. Many think that NNs were developed AFTER that. But that's not the case: the first "modern" NNs with 2 layers of units were invented over 2 centuries ago (1795-1805) by Legendre (1805) and Gauss (1795, unpublished) [STI81], when compute was many trillions of times more expensive than in 2025.

True, the terminology of artificial neural nets was introduced only much later in the 1900s. For example, certain non-learning NNs were discussed in 1943 [MC43]. Informal thoughts about a simple NN learning rule were published in 1948 [HEB48]. Evolutionary computation for NNs was mentioned in an unpublished 1948 report [TUR1]. Various concrete learning NNs were published in 1958 [R58], 1961 [R61][ST61-95], and 1962 [WID62].

However, while these NN papers of the mid 1900s are of historical interest, THEY HAVE ACTUALLY LESS TO DO WITH MODERN AI THAN THE MUCH OLDER ADAPTIVE NN by Gauss & Legendre, still heavily used today, the very foundation of all NNs, including the recent deeper NNs [DL25]. 

The Gauss-Legendre NN from over 2 centuries ago [NN25] has an input layer with several input units, and an output layer. For simplicity, let's assume the latter consists of a single output unit. Each input unit can hold a real-valued number and is connected to the output unit by a connection with a real-valued weight. The NN's output is the sum of the products of the inputs and their weights. Given a training set of input vectors and desired target values for each of them, the NN weights are adjusted such that the sum of the squared errors between the NN outputs and the corresponding targets is minimized [DLH]. Now the NN can be used to process previously unseen test data. 
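A compact formalization of the setup just described (the symbols below are my notation, not from the thread): with input units x_1, …, x_n, weights w_1, …, w_n, and N training pairs, the network computes a weighted sum, and the weights minimize the summed squared error, which has the familiar closed-form least-squares solution.

```latex
\hat{y} = \sum_{i=1}^{n} w_i x_i = \mathbf{w}^{\top}\mathbf{x}
\qquad
E(\mathbf{w}) = \sum_{j=1}^{N} \bigl(\mathbf{w}^{\top}\mathbf{x}^{(j)} - t^{(j)}\bigr)^{2}
\qquad
\mathbf{w}^{*} = (X^{\top}X)^{-1} X^{\top}\mathbf{t}
```

Here X stacks the N training input vectors as rows and t stacks the corresponding targets; the closed form assumes X^T X is invertible.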

Of course, back then this was not called an NN, because people didn't even know about biological neurons yet - the first microscopic image of a nerve cell was created decades later by Valentin in 1836, and the term "neuron" was coined by Waldeyer in 1891 [CAJ06]. Instead, the technique was called the Method of Least Squares, also widely known in statistics as Linear Regression. But it is MATHEMATICALLY IDENTICAL to today's linear 2-layer NNs: SAME basic algorithm, SAME error function, SAME adaptive parameters/weights. Such simple NNs perform "shallow learning," as opposed to "deep learning" with many nonlinear layers [DL25]. In fact, many modern NN courses start by introducing this method, then move on to more complex, deeper NNs [DLH].
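To make the "mathematically identical" claim concrete, here is a minimal sketch (toy data and hyperparameters invented for illustration, not from the thread): training the linear 2-layer NN by gradient descent on squared error recovers exactly the weights of a standard least-squares / linear-regression solver.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # 100 training vectors, 3 input units (made up)
true_w = np.array([2.0, -1.0, 0.5])
t = X @ true_w + 0.1 * rng.normal(size=100)   # noisy targets

# Method of Least Squares / linear regression (Gauss & Legendre): closed-form fit
w_lstsq, *_ = np.linalg.lstsq(X, t, rcond=None)

# The same model viewed as a linear "NN": weighted sum of inputs,
# trained by gradient descent on the same sum-of-squared-errors loss
w = np.zeros(3)
lr = 0.01
for _ in range(5000):
    y = X @ w                                 # NN output for all training inputs
    w -= lr * 2 * X.T @ (y - t) / len(t)      # gradient step on mean squared error

print(np.allclose(w, w_lstsq, atol=1e-4))     # True: same weights, same model
```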

Even the applications of the early 1800s were similar to today's: learn to predict the next element of a sequence, given previous elements. THAT'S WHAT CHATGPT DOES! The first famous example of pattern recognition through an NN dates back over 200 years: the rediscovery of the dwarf planet Ceres in 1801 by Gauss, who collected noisy data points from previous astronomical observations, then used them to adjust the parameters of a predictor, which essentially learned to generalise from the training data to correctly predict the new location of Ceres. That's what made the young Gauss famous [DLH].
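As a toy illustration of "predict the next element of a sequence, given previous elements" with the same two-century-old tool (the sequence and window size below are invented for the example):

```python
import numpy as np

seq = np.sin(0.1 * np.arange(200))            # a made-up smooth sequence
k = 4                                         # predict from the 4 previous elements

# Training pairs: (previous k values) -> next value
X = np.array([seq[i:i + k] for i in range(len(seq) - k)])
t = seq[k:]

w, *_ = np.linalg.lstsq(X, t, rcond=None)     # least-squares predictor weights

next_pred = seq[-k:] @ w                      # predict the element after the observed sequence
print(next_pred)
```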

The old Gauss-Legendre NNs are still being used today in innumerable applications. What's the main difference to the NNs used in some of the impressive AI applications since the 2010s? The latter are typically much deeper and have many intermediate layers of learning "hidden" units. Who invented this? Short answer: Ivakhnenko & Lapa (1965) [DEEP1-2]. Others refined this [DLH]. See also: who invented deep learning [DL25]?

Some people still believe that modern NNs were somehow inspired by the biological brain. But that's simply not true: decades before biological nerve cells were discovered, plain engineering and mathematical problem solving already led to what's now called NNs. In fact, in the past 2 centuries, not so much has changed in AI research: as of 2025, NN progress is still mostly driven by engineering, not by neurophysiological insights. (Certain exceptions dating back many decades [CN25] confirm the rule.) 

Footnote 1. In 1958, simple NNs in the style of Gauss & Legendre were combined with an output threshold function to obtain pattern classifiers called Perceptrons [R58][R61][DLH]. Astonishingly, the authors [R58][R61] seemed unaware of the much earlier NN (1795-1805) famously known in the field of statistics as "method of least squares" or "linear regression." Remarkably, today's most frequently used 2-layer NNs are those of Gauss & Legendre, not those of the 1940s [MC43] and 1950s [R58] (which were not even differentiable)! 
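A minimal sketch of the footnote's point (the weights and input below are invented): a Perceptron-style classifier is the same weighted sum with a hard threshold applied to the output, which is what makes it non-differentiable.

```python
import numpy as np

def perceptron_classify(x, w, threshold=0.0):
    # Gauss/Legendre-style weighted sum, then a non-differentiable step function
    return 1 if x @ w >= threshold else 0

w = np.array([0.8, -0.3])                     # made-up weights
print(perceptron_classify(np.array([1.0, 2.0]), w))   # -> 1
```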

SELECTED REFERENCES (many additional references in [NN25] - see link above):

[CAJ88] S. R. Cajal. Estructura de los centros nerviosos de las aves. Rev. Trim. Histol. Norm. Patol., 1 (1888), pp. 1-10.

[CAJ88b] S. R. Cajal. Sobre las fibras nerviosas de la capa molecular del cerebelo. Rev. Trim. Histol. Norm. Patol., 1 (1888), pp. 33-49.

[CAJ89] S. R. Cajal. Conexión general de los elementos nerviosos. Med. Práct., 2 (1889), pp. 341-346.

[CAJ06] F. López-Muñoz, J. Boya, C. Alamo (2006). Neuron theory, the cornerstone of neuroscience, on the centenary of the Nobel Prize award to Santiago Ramón y Cajal. Brain Research Bulletin, Volume 70, Issues 4–6, 16 October 2006, Pages 391-405.

[CN25] J. Schmidhuber (AI Blog, 2025). Who invented convolutional neural networks?

[DEEP1] Ivakhnenko, A. G. and Lapa, V. G. (1965). Cybernetic Predicting Devices. CCM Information Corporation. First working Deep Learners with many layers, learning internal representations.

[DEEP1a] A. G. Ivakhnenko. The group method of data handling; a rival of the method of stochastic approximation. Soviet Automatic Control 13 (1968): 43-55.

[DEEP2] Ivakhnenko, A. G. (1971). Polynomial theory of complex systems. IEEE Transactions on Systems, Man and Cybernetics, (4):364-378.

[DL25] J. Schmidhuber. Who invented deep learning? Technical Note IDSIA-16-25, IDSIA, November 2025.

[DLH] J. Schmidhuber. Annotated History of Modern AI and Deep Learning. Technical Report IDSIA-22-22, IDSIA, Lugano, Switzerland, 2022. Preprint arXiv:2212.11279. 

[HEB48] J. Konorski (1948). Conditioned reflexes and neuron organization. Translation from the Polish manuscript under the author's supervision. Cambridge University Press, 1948. Konorski published the so-called "Hebb rule" before Hebb [HEB49].

[HEB49] D. O. Hebb. The Organization of Behavior. Wiley, New York, 1949. Konorski [HEB48] published the so-called "Hebb rule" before Hebb. 

[MC43] W. S. McCulloch, W. Pitts. A Logical Calculus of Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics, Vol. 5, p. 115-133, 1943.

[NN25] J. Schmidhuber. Who invented artificial neural networks? Technical Note IDSIA-15-25, IDSIA, November 2025.

[R58] Rosenblatt, F. (1958). The perceptron: a probabilistic model for information storage and organization in the brain. Psychological review, 65(6):386. 

[R61] Joseph, R. D. (1961). Contributions to perceptron theory. PhD thesis, Cornell Univ.

[R62] Rosenblatt, F. (1962). Principles of Neurodynamics. Spartan, New York.

[ST61] K. Steinbuch. Die Lernmatrix. (The learning matrix.) Kybernetik, 1(1):36-45, 1961. 

[TUR1] A. M. Turing. Intelligent Machinery. Unpublished Technical Report, 1948. In: Ince DC, editor. Collected works of AM Turing—Mechanical Intelligence. Elsevier Science Publishers, 1992.

[STI81] S. M. Stigler. Gauss and the Invention of Least Squares. Ann. Stat. 9(3):465-474, 1981. 

[WID62] Widrow, B. and Hoff, M. (1962). Associative storage and retrieval of digital information in networks of adaptive neurons. Biological Prototypes and Synthetic Systems, 1:160, 1962.

Invented principles of meta-learning (1987), GANs (1990), Transformers (1991), very deep learning (1991), etc. Our AI is used many billions of times every day.

Jürgen Schmidhuber
Sun Nov 30 16:00:02