LogoThread Easy

The all-in-one partner for Twitter threads


Explore

Newest first — browse tweet threads


cloning my couples app where you can send doodle notes and making it a family group chat app version 

let's see if i can get it pushed out for review by end of day (will post updates as i go)


curious guy creating things @ https://t.co/HXWladhJaA - up and coming wife guy

jack friks
Sun Dec 14 16:00:43
Social media are full of misinformation about AI history. To all "AI influencers": before you post your next piece, take history lessons from the AI Blog, with chapters on:
Who invented artificial neural networks? 1795-1805
Who invented deep learning? 1965
Who invented backpropagation? 1676-1970
Who invented convolutional neural nets? 1979-1988
Who invented generative adversarial networks? 1990
Who invented Transformer neural networks? 1991-2017
Who invented deep residual learning? 1991-2015
Who invented neural knowledge distillation? 1991
Who invented the transistor? 1925
Who invented the integrated circuit? 1949
Who created the general-purpose computer? 1936-1941
Who founded theoretical CS and AI theory? 1931-1934
And many more ...


Invented principles of meta-learning (1987), GANs (1990), Transformers (1991), very deep learning (1991), etc. Our AI is used many billions of times every day.

Jürgen Schmidhuber
Sun Dec 14 16:00:08
Some observations/thoughts on current LLMs for the fiction-writing use-case...

I mainly use LLMs for math, prototype/research software code, and basic everyday QA/search ... However recently I decided to try using GPT5.2 to prompt an SF/thriller novel... my first try at LLM fiction since using GPT4.* as a supporting tool in collaborating on a screenplay for an anime (QASIM, the Quantum ASI Matrix, which is in the works).

What I started with: Plot, characters, theme, fictional world

Quick reflections on what is better now vs Q1 2025 regarding using LLMs for fiction, and what's the same and what's worse.

Around the same: understanding of character is good, basic prose construction is strong, understanding of plot within a scene is good, understanding of theme and tone is good (as long as tone is close enough to typical for some genre). Dialogue is still terribly wooden/cliché and only half-usable at best (by my own aesthetic), though there are some wonderful snippets too.

Worse: 5.2 is extremely PC to a fascist extent, you have to do prompt gymnastics to get it to have a character do hacking, or dress up as a person of a different religion. This is insane -- do we really want to excise illegal or offensive acts from *fiction*? I am reminded of Jack Williamson's classic "The Humanoids" where the AI robots want "to serve and obey and guard men from harm" so they ban wood shops (you could cut yourself!) and Shakespeare (so emotionally disturbing!!)....

Better: The main difference is, the model can now understand the narrative and thematic flow of a whole novel (OK this is not Remembrance of Things Past, but...) and bring this understanding to bear on its scripting of each of the parts.

In the end, if I ever find time to produce this novel (I find writing fiction a decent way to use the time on long flights without usable wifi, for instance ;), what I will do is: take the LLM's zeroth draft as a guide for structure and rhythm, and re-write almost all of it, but keeping the choice bits from the LLM's production.

This may actually be useful for me, as when writing fiction free-flow I have trouble staying within the mood and structure of any genre, quickly multi-verging into surrealist garden-of-forking-paths-of-consciousness... The LLM is really good at genre-ness and cliché, and since my instinct is to be overly creative and weird for most readers, using the LLM's structure (as laid out for my particular plot and characters and theme) as an approximate template might be quite helpful... we'll see...

In terms of tech progress, this level of improvement in some respects, over a period of 9 months or so, is obviously impressive and dramatic. For this use-case, to me anecdotally, 5.1 and 5.2 are a big leap past the 4.* or o* models (or any other non-OpenAI models existing concurrently with them). (GPT5-Pro is also a big leap beyond the prior o1 and o3 for math ... and coding models are getting better and better fast too... but that's not my topic here...)

If you wanted to churn out competent but cliché genre fiction, the LLM can probably do it now as well as the famed "median human" and maybe better... However the lack of progress on rich aesthetic quality is interesting. After all, there is a lot of deeply beautiful material online to train from. But so far it seems that authentic, compelling aesthetics requires a degree of specificity to the work being created that is not obtained from algorithms that munge together patterns from huge datasets in a shallow way (nor from any other algorithms).

Whether 2026 or 2027 LLMs will be able to produce aesthetically compelling works of fiction remains to be seen. I find myself more fascinated, computational-creativity-wise, by making AIs that can produce aesthetically compelling works out of *their own* lived experience. Yes, commissioning an apropos work of fiction from an LLM author could be cool, in that fiction can be a powerful way to communicate important ideas to people who are more emotionally receptive to fiction than nonfiction. OTOH I enjoy writing fiction and don't especially want an LLM to do the whole thing "for me." However it will also certainly be interesting to see what it takes, AI-architecture-wise, to pass the "emulating aesthetically compelling human products" milestone.... We are not there yet...

I have worked more with AI for music... our first "Desdemona's Dream" double-album will come out early next year, featuring not just a robot on vocals (singing and spoken-word) but various AI-generated beats and soundscapes in a context of mostly human-jammed music. In June of this year we did a recording session in Mexico City where about half the songs had more significant AI-composed components... There as well, what I find is: the AI can come up with some rather aesthetically cool or even profound segments and parts, if you prompt it well and select the good bits ... but if you try to get it to produce too much of an overall work, it reverts to cliché way too much for my own taste. Not to say you couldn't get a viable new pop song using current music-AI, but I don't think you could get a *classic* pop song, nor a richly original, meaningful composition in a more complex genre. (I have some different ideas about how to make AI music composition work well using current tech, but these mix neural models with different sorts of AI... and that's not my topic here either...)

LLMs are a transitional tech between narrow AI and actual AGI, which, as most who follow me know, I think will be achieved via different methods (perhaps hybrid systems like Hyperon leveraging LLMs as one component). So we could just wait till we get actual AGI that will be far less dodgy as a creative collaborator on fiction or music projects. It may just be a couple years, we'll see. OTOH experimenting with tools at varying degrees of capability is also fun, and of course is part of an artistic process -- so much of aesthetic creation is always about working around and pushing against the limitations of one's medium and tools... from the limited vocabulary of natural language (unless you're doing Finnegans Wake) to the 12 tones of the scale etc. ... the limitations of each new phase of LLMs are part of the scape...


Building Beneficial AGI - CEO @asi_alliance @singularitynet, @true_agi , Interim CEO @Singularity_Fi, @SophiaVerse_AI, Chair @opencog @HumanityPlus @iCog_Labs

Ben Goertzel
Sun Dec 14 15:54:15
This project is about to hit 10K stars on GitHub. Is it really that good?

Next AI Draw: a Next.js-based web app that integrates AI features with a powerful Draw diagramming tool, letting you easily create, modify, and enhance all kinds of diagrams through natural-language instructions and AI-assisted visualization.

Stop being the diagram-drawing grunt; let AI be your grunt instead.
https://t.co/aaOMghvIwm
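
The post doesn't include code, but the loop it describes (a plain-language instruction in, an updated diagram out) is easy to sketch. Below is a minimal TypeScript sketch of that pattern; the /api/diagram route, request shape, and element type are hypothetical stand-ins, not Next AI Draw's actual API.

// Hypothetical element shape for a canvas of simple diagram nodes.
type DiagramElement = { id: string; kind: "rect" | "ellipse" | "arrow"; label?: string };

// Send the current canvas plus a natural-language instruction to a
// (hypothetical) Next.js route that asks an LLM for the updated element list.
async function updateDiagram(
  instruction: string,
  current: DiagramElement[],
): Promise<DiagramElement[]> {
  const res = await fetch("/api/diagram", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ instruction, elements: current }),
  });
  if (!res.ok) throw new Error(`diagram update failed: ${res.status}`);
  return (await res.json()).elements as DiagramElement[];
}

// Usage: start from an empty canvas and refine with plain-language commands.
// const v1 = await updateDiagram("draw a three-tier web architecture", []);
// const v2 = await updateDiagram("add a cache between the app and the DB", v1);

The interesting design choice in apps of this shape is having the model return a structured element list rather than an image, so each instruction edits the previous state instead of regenerating the whole diagram.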


🧠 Lay Buddhist | 🥦 Vegetarian | 🏃🏻 Marathon enthusiast | 💰 Money-saving whiz | 🪜 Veteran scholar of circumvention tech | 👨‍💻 Tech geek | 🆕 Compulsive updater | 🆅 Hexagonal stat chart, weak on every axis

Geek
Sun Dec 14 15:46:38
Someone smart made a great analogy to me that I told him I’d steal so here it is: RLing a model to do a specific (benchmarkable) task is like finding a chemical compound that has a specific medical effect. It may or may not work for other, even non-adjacent tasks — you can only learn what else it’s good for (or any side effects) by experimenting.
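
Taking the analogy literally, the only way to map a finetune's full effect profile is an empirical screen: score the tuned checkpoint against its base on tasks far from the one it was RL-trained on. A minimal sketch in TypeScript, assuming a hypothetical eval harness (the evaluate callback, task names, and model ids below are all stand-ins):

// Compare base vs. RL-finetuned scores across unrelated tasks to surface
// transfer (unexpected gains) and side effects (regressions).
type EvalFn = (model: string, task: string) => Promise<number>;

async function sideEffectScreen(
  base: string,
  tuned: string,
  tasks: string[],
  evaluate: EvalFn,
): Promise<void> {
  for (const task of tasks) {
    const [b, t] = await Promise.all([evaluate(base, task), evaluate(tuned, task)]);
    const delta = t - b; // positive: happy accident; negative: side effect
    console.log(`${task}: base=${b.toFixed(3)} tuned=${t.toFixed(3)} delta=${delta.toFixed(3)}`);
  }
}

// Usage: the model was RL-trained on coding, but we probe everything else too.
// await sideEffectScreen("base-v1", "rl-coding-v1",
//   ["coding", "summarization", "translation", "tool-use"], myEvalHarness);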


Author. Coder. CTO. θηριομάχης. Building: https://t.co/otXT4Wy6WR. Writing: https://t.co/dBPBtyCIHw.

Jon Stokes
Sun Dec 14 15:43:24
I wrote about exactly this a while back. It has been obvious for a while IMO https://t.co/SIe1Fewlx6



Jon Stokes
Sun Dec 14 15:38:53