Thread Easy
  • Explore
  • Compose thread

Your all-in-one companion for Twitter threads

© 2025 Thread Easy All Rights Reserved.

Explore

Newest first — browse tweet threads


RT @pmddomingos: The path to AGI runs through the things you don’t think about anymore because PyTorch has automated the wrong way to do th…


Building Beneficial AGI - CEO @asi_alliance @singularitynet, @true_agi , Interim CEO @Singularity_Fi, @SophiaVerse_AI, Chair @opencog @HumanityPlus @iCog_Labs

Ben Goertzel
Mon Dec 15 23:15:10
Meanwhile SNet spinoff Nunet is whizzing along getting decentralized AI orchestration infrastructure rolled out all over the place ;)



Ben Goertzel
Mon Dec 15 22:35:07
Some observations/thoughts on current LLMs for the fiction-writing use-case...

I mainly use LLMs for math, prototype/research software code, and basic everyday QA/search... However, recently I decided to try using GPT5.2 to prompt an SF/thriller novel... my first try at LLM fiction since using GPT4.* as a supporting tool in collaborating on a screenplay for an anime (QASIM, the Quantum ASI Matrix, which is in the works).

What I started with: Plot, characters, theme, fictional world

Quick reflections on what is better now vs Q1 2025 regarding using LLMs for fiction, and what's the same and what's worse.

Around the same: understanding of character is good, basic prose construction is strong, understanding of plot within a scene is good, understanding of theme and tone is good (as long as the tone is close enough to typical for some genre). Dialogue is still terribly wooden/clichéd and only half-usable at best (by my own aesthetic), though there are some wonderful snippets too.

Worse: 5.2 is extremely PC, to a fascist extent; you have to do prompt gymnastics to get it to have a character do hacking, or dress up as a person of a different religion. This is insane -- do we really want to excise illegal or offensive acts from *fiction*? I am reminded of Jack Williamson's classic "The Humanoids", where the AI robots want "to serve and protect and guard men from harm", so they ban wood shops (you could cut yourself!) and Shakespeare (so emotionally disturbing!!)...

Better: The main difference is, the model can now understand the narrative and thematic flow of a whole novel (OK this is not Remembrance of Things Past, but...) and bring this understanding to bear on its scripting of each of the parts.

In the end, if I ever find time to produce this novel (I find writing fiction a decent way to use the time on long flights without usable wifi, for instance ;), what I will do is: take the LLM's zeroth draft as a guide for structure and rhythm, and re-write almost all of it, keeping the choice bits from the LLM's production.

This may actually be useful for me, as when writing fiction free-flow I have trouble staying within the mood and structure of any genre, quickly multi-verging into surrealist garden-of-forking-paths-of-consciousness... The LLM is really good at genre-ness and cliché, and since my instinct is to be overly creative and weird for most readers, using the LLM's structure (as laid out for my particular plot and characters and theme) as an approximate template might be quite helpful... we'll see...

In terms of tech progress, this level of progress in some respects, over a period of 9 months or so, is obviously impressive and dramatic. For this use-case, to me anecdotally, 5.1 and 5.2 are a big leap past the 4.* or o* models (or any other non-OpenAI models existing concurrently with them). (GPT5-Pro is also a big leap beyond the prior o1 and o3 for math... and coding models are getting better and better fast too... but that's not my topic here.)

If you wanted to churn out competent but clichéd genre fiction, the LLM can probably do it now as well as the famed "median human", and maybe better... However, the lack of progress on rich aesthetic quality is interesting. After all, there is a lot of deeply beautiful material online to train from. But so far it seems that authentic, compelling aesthetics requires a degree of specificity to the work being created that is not obtained from algorithms that munge together patterns from huge datasets in a shallow way (nor from any other algorithms).

Whether 2026 or 2027 LLMs will be able to produce aesthetically compelling works of fiction remains to be seen. I find myself more fascinated, computational-creativity-wise, by making AIs that can produce aesthetically compelling works out of *their own* lived experience. Yes, commissioning an apropos work of fiction from an LLM author could be cool, in that fiction can be a powerful way to communicate important ideas to people who are more emotionally receptive to fiction than nonfiction. OTOH I enjoy writing fiction and don't especially want an LLM to do the whole thing "for me." However, it will also certainly be interesting to see what it takes, AI-architecture-wise, to pass the "emulating aesthetically compelling human products" milestone... We are not there yet...

I have worked more with AI for music... our first "Desdemona's Dream" double-album will come out early next year, featuring not just a robot on vocals (singing and spoken-word) but various AI-generated beats and soundscapes in a context of mostly human-jammed music. In June of this year we did a recording session in Mexico City where about half the songs had more significant AI-composed components... There as well, what I find is: the AI can come up with some rather aesthetically cool or even profound segments and parts, if you prompt it well and select the good bits... but if you try to get it to produce too much of an overall work, it reverts to cliché way too much for my own taste. Not to say you couldn't get a viable new pop song using current music-AI, but I don't think you could get a *classic* pop song, nor a richly original, meaningful composition in a more complex genre. (I have some different ideas about how to make AI music composition work well using current tech, but these mix neural models with different sorts of AI... and that's not my topic here either...)

LLMs are a transitional tech between narrow AI and actual AGI, which, as most who follow me know, I think will be achieved via different methods (perhaps hybrid systems like Hyperon leveraging LLMs as one component). So we could just wait till we get actual AGI, which will be far less dodgy as a creative collaborator on fiction or music projects. It may just be a couple of years, we'll see. OTOH experimenting with tools at varying degrees of capability is also fun, and of course is part of an artistic process -- so much of aesthetic creation is always about working around and pushing against the limitations of one's medium and tools... from the limited vocabulary of natural language (unless you're doing Finnegans Wake) to the 12 tones of the scale etc. ... the limitations of each new phase of LLMs are part of the scape...



Ben Goertzel
Sun Dec 14 15:54:15
Beautifully high-quality new live Buckethead performance inside a cave in Tennessee... how the hell did I miss this??  ;o ... https://t.co/EN8t2k0WEE



Ben Goertzel
Fri Dec 12 17:30:22
Our little Mind-Children robot Codey showing off at the Humanoids Summit (his moment of glory is around 1:30) ... https://t.co/piBb6QZOPF ....

I didn't go to the event but am told it was the only robot there that was verbally interacting with people...

We are developing Codey as an AGI R&D platform, and also as a commercial social/emotional robot for (initially) the education and healthcare markets, with likely first rollout in South Korea...

@SingularityNET @ASI_Alliance @OpenCog



Ben Goertzel
Fri Dec 12 16:10:48
RT @pmddomingos: ICML and NeurIPS should merge into a single conference called ICPS (International Conference on Plagiarizing Schmidhuber).



Ben Goertzel
Thu Dec 11 00:10:00