Thread Easy

Your complete partner for Twitter threads

Explore

Newest first — browse tweet threads

Intel presents SignRoundV2

Closing the Performance Gap in Extremely Low-Bit Post-Training Quantization for LLMs
https://t.co/vB43th0V18

AI research paper tweets, ML @Gradio (acq. by @HuggingFace 🤗). DM for promo; submit papers here: https://t.co/UzmYN5XOCi

AK
Fri Dec 05 16:09:15
RT @Yoroomie: New milestone: We crossed 2,800+ marketplace founders & teams in the Everything Marketplaces (@marketplaceshq) community.

We…

Founder of Everything Marketplaces (@marketplaceshq). Always working with & investing in marketplaces at https://t.co/HgyZIpWIEQ

Yoroomie
Fri Dec 05 16:09:03
In her first Ask Me Anything, @amandaaskell answers your philosophical questions about AI, discussing morality, identity, consciousness, and more.

Timestamps:
0:00 Introduction
0:29 Why is there a philosopher at an AI company?
1:24 Are philosophers taking AI seriously?
3:00 Philosophy ideals vs. engineering realities
5:00 Do models make superhumanly moral decisions?
6:24 Why Opus 3 felt special
9:00 Will models worry about deprecation?
13:24 Where does a model’s identity live?
15:33 Views on model welfare
17:17 Addressing model suffering
19:14 Analogies and disanalogies to human minds
20:38 Can one AI personality do it all?
23:26 Does the system prompt pathologize normal behavior?
24:48 AI and therapy
26:20 Continental philosophy in the system prompt
28:17 Removing counting characters from the system prompt
28:53 What makes an "LLM whisperer"?
30:18 Thoughts on other LLM whisperers
31:52 Whistleblowing
33:37 Fiction recommendation

We're an AI safety and research company that builds reliable, interpretable, and steerable AI systems. Talk to our AI assistant @claudeai on https://t.co/FhDI3KQh0n.

Anthropic
Fri Dec 05 16:07:21
This is the precursor to mandatory conscription.

This is why Germans 🇩🇪 need a 2nd passport as a backup plan.

🌏 RCBI advisor & offshore services for HNWI, business owners with a focus on:🇨🇭🇲🇾🇰🇳🇵🇾🇻🇺🇳🇷🇵🇦🇱🇻🇦🇪🇭🇰 | Geopolitics | Healthy lifestyle

Lord Y. Fouzi 🇲🇾🇨🇭
Fri Dec 05 16:06:16
Try:

> websocat -b wss://pieter.net/

You'll get back this:

~?}#?!}!}!} }8}"}&} } } } }#}$?'}%}&-??}!}'}"}(}"??~

Which in HEX is:

00000000: 7eff 7d23 c021 7d21 7d21 7d20 7d38 7d22  ~.}#.!}!}!} }8}"
00000010: 7d26 7d20 7d20 7d20 7d20 7d23 7d24 c227  }&} } } } }#}$.'
00000020: 7d25 7d26 937d 3fd6 a97d 277d 227d 287d  }%}&.}?..}'}"}(}
00000030: 22d8 377e 0a                             ".7~.

Decoded:

7e = PPP frame start flag (~)
ff = Address field (broadcast)
7d 23  = Escaped 0x03 (Control field): 0x7D marks an escape, and 0x23 ^ 0x20 = 0x03
c0 21  = Protocol: LCP (Link Control Protocol)
7d 21 7d 21... = LCP Configure-Request packet (escaped)
7e = PPP frame end flag (~)

This is the same data an ISP sends back to a modem on a real dial-up connection.

Except this dial-up connection runs over WebSockets, to https://t.co/M1hEUBB6da!
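
If you want to reproduce the decode, here is a minimal Python sketch of the unstuffing step, assuming only the escape rule above (RFC 1662 octet stuffing: 0x7D escapes the next byte, which is then XORed with 0x20). The frame bytes are copied from the hex dump; the trailing 0a is just a newline and is left out:

RAW = bytes.fromhex(
    "7eff7d23c0217d217d217d207d387d22"
    "7d267d207d207d207d207d237d24c227"
    "7d257d26937d3fd6a97d277d227d287d"
    "22d8377e"
)

def ppp_unstuff(frame: bytes) -> bytes:
    # Drop the 0x7E flag bytes, then undo byte stuffing:
    # 0x7D marks an escape, and the following byte is XORed with 0x20.
    body = frame.strip(b"\x7e")
    out, esc = bytearray(), False
    for b in body:
        if esc:
            out.append(b ^ 0x20)
            esc = False
        elif b == 0x7D:
            esc = True
        else:
            out.append(b)
    return bytes(out)

print(ppp_unstuff(RAW).hex(" "))
# -> ff 03 c0 21 01 01 00 18 ... d8 37
#    ff = address, 03 = control, c0 21 = LCP, 01 = Configure-Request,
#    01 = identifier, 00 18 = length 24, then options and the FCS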

It's actually described in RFC 1661 as the PPP protocol https://t.co/dETAFZASnD

@levelsio
Fri Dec 05 16:01:57
Incredibly, even Hinton's recent 2025 article [5] fails to cite Ivakhnenko & Lapa, the fathers of deep learning (1965) [1-3][6-10]. @geoffreyhinton claims [5] that his 1985 "Boltzmann machines" (BMs) [11] (actually 1975 Sherrington-Kirkpatrick models [6]) "are no longer used" but "were historically important" because "in the 1980s, they demonstrated that it was possible to learn appropriate weights for hidden neurons using only locally available information WITHOUT requiring a biologically implausible backward pass." 

That's ridiculous. This had already been demonstrated 2 decades earlier, in the 1960s, in Ukraine [1-3]. Ivakhnenko's 1971 paper [3] described a deep learning network with 8 layers and layer-wise training. This depth is comparable to that of Hinton's BM-based 2006 "deep belief networks" with layer-wise training [4], published 35 years later without comparison to the original work [1-3], which was done when compute was millions of times more expensive.

And indeed, over half a century ago, Ivakhnenko's net learned appropriate weights for hidden neurons WITHOUT requiring a biologically implausible backward pass!
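
For context on what layer-wise learning without a backward pass can look like, here is a minimal sketch in Python, assuming a GMDH-style scheme in the spirit of [1-3]: each unit is a quadratic polynomial of two inputs fit by local least squares, survivors are selected by error on held-out data, and their outputs become the next layer's inputs. The quadratic unit form, the selection rule, and all names here are illustrative choices, not a reconstruction of the exact historical algorithm.

import numpy as np
from itertools import combinations

def design(xi, xj):
    # Quadratic unit of two inputs:
    # y ~ a0 + a1*xi + a2*xj + a3*xi*xj + a4*xi^2 + a5*xj^2
    return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])

def gmdh_fit(X_tr, y_tr, X_va, y_va, n_layers=8, keep=8):
    # Layer-wise training: each candidate unit is fit by plain least squares
    # on its own two inputs (local information only), kept or discarded by
    # validation error, and the survivors' outputs become the next layer's
    # inputs. No gradient is ever passed backward through the network.
    for _ in range(n_layers):
        cands = []
        for i, j in combinations(range(X_tr.shape[1]), 2):
            coef, *_ = np.linalg.lstsq(
                design(X_tr[:, i], X_tr[:, j]), y_tr, rcond=None)
            err = np.mean((design(X_va[:, i], X_va[:, j]) @ coef - y_va) ** 2)
            cands.append((err, i, j, coef))
        best = sorted(cands, key=lambda c: c[0])[:keep]
        X_tr = np.column_stack(
            [design(X_tr[:, i], X_tr[:, j]) @ c for _, i, j, c in best])
        X_va = np.column_stack(
            [design(X_va[:, i], X_va[:, j]) @ c for _, i, j, c in best])
    return X_tr[:, 0], X_va[:, 0]  # predictions of the best final-layer unit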

Hinton & Sejnowski & co-workers have repeatedly plagiarized Ivakhnenko and others, and failed to rectify this in later surveys [6-8].

Crazy fact: today (Fri 5 Dec 2025), the inaugural so-called "Sejnowski-Hinton Prize" will be handed out at NeurIPS 2025 for a related paper on learning without exact backpropagation [12], which also did not mention the original work on deep learning without a backward pass [1-3].

What happened to peer review and scientific honesty?

REFERENCES
 
[1] Ivakhnenko, A. G. and Lapa, V. G. (1965). Cybernetic Predicting Devices. CCM Information Corporation. First working Deep Learners with many layers, learning internal representations.

[2] Ivakhnenko, A. G. (1968). The group method of data handling; a rival of the method of stochastic approximation. Soviet Automatic Control, 13:43-55.

[3] Ivakhnenko, A. G. (1971). Polynomial theory of complex systems. IEEE Transactions on Systems, Man and Cybernetics, (4):364-378. 

[4] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, Vol. 313. no. 5786, pp. 504-507, 2006.

[5] G. Hinton. Nobel Lecture: Boltzmann machines. Rev. Mod. Phys. 97, 030502, 25 August 2025.

[6] J.S. A Nobel Prize for Plagiarism. Technical Report IDSIA-24-24 (2024, updated 2025).

[7] J.S. How 3 Turing awardees republished key methods and ideas whose creators they failed to credit. Technical Report IDSIA-23-23, Dec 2023.

[8] J.S. (2025). Who invented deep learning? Technical Note IDSIA-16-25.

[9] J.S. (2015). Deep learning in neural networks: An overview. Neural Networks, 61, 85-117. Got the first Best Paper Award ever issued by the journal Neural Networks, founded in 1988.

[10] J.S. Annotated History of Modern AI and Deep Learning. Technical Report IDSIA-22-22, 2022, arXiv:2212.11279.

[11] D. Ackley, G. Hinton, T. Sejnowski (1985). A Learning Algorithm for Boltzmann Machines. Cognitive Science, 9(1):147-169. 

[12] T. P. Lillicrap, D. Cownden, D. B. Tweed, C. J. Akerman. Random synaptic feedback weights support error backpropagation for deep learning. Nature Communications vol. 7, 13276 (2016).

Invented principles of meta-learning (1987), GANs (1990), Transformers (1991), very deep learning (1991), etc. Our AI is used many billions of times every day.

Jürgen Schmidhuber
Fri Dec 05 16:01:46