Thread Easy

Your all-in-one partner for Twitter threads

© 2025 Thread Easy. All Rights Reserved.

Explore

Newest first — browse tweet threads


RT @Brartxrp: Many people assume that what a company fears most is "insider information being exposed." It isn't.
What truly keeps them up at night has never been what you know, but whether they can still control the situation after you say it publicly.

So you'll notice a rather ironic pattern:
what actually brings a project down is usually not "fake news," but the kind of information that everyone can understand, yet the officials…


Indie developer | Personal-brand coach | Helping newcomers through their early growth on X | WeChat official account: PandaTalk8

Mr Panda
Tue Dec 16 08:08:29
Impressive: Alibaba just released Wan 2.6, with role-play support. Given a video of a character plus a prompt, it can automatically handle storyboarding, acting, and voice-over.

In other words, from a single input video it preserves the character, voice, and movements, then has them "re-act" the scene according to the prompt.

Supports generating 15-second videos in one pass
Supports audio-video sync, voice-driven animation, and automatic multi-shot switching

Judging from the generated output, the overall visual quality, character fidelity, camera work, and handling of complex scenes are all quite good.

This is a boon for ads, short dramas, AI animated series, and similar formats: given a sequence of consecutive prompts, it can produce a short film with a complete narrative.

#AIVideoGeneration #AlibabaWan #AIFilm


Try it: https://t.co/Uhdt7PDBAO API: https://t.co/iCjp0Tjsd6

AIGCLINK
Tue Dec 16 08:07:20
I thought I really understood prompt engineering, until I set out to write a series of technical articles on AI agents. Only then did I realize my grasp of prompt engineering was superficial at best.

---
GPT-2 has 1.5 billion parameters, compared with GPT's 117 million;
GPT-2 was trained on 40 GB of text, versus only 4.5 GB for GPT.
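A quick sanity check of the scale jump those numbers imply (purely illustrative arithmetic on the figures quoted above):

```python
# Figures from the passage above: GPT vs. GPT-2.
gpt_params, gpt2_params = 117e6, 1.5e9
gpt_data_gb, gpt2_data_gb = 4.5, 40.0

param_ratio = gpt2_params / gpt_params
data_ratio = gpt2_data_gb / gpt_data_gb

print(round(param_ratio, 1))  # ~12.8x more parameters
print(round(data_ratio, 1))   # ~8.9x more training data
```

So both axes grew roughly an order of magnitude at once, which is the jump the passage credits for the emergent behavior.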

This order-of-magnitude jump in both model size and training data brought an unprecedented emergent quality:

Researchers no longer needed to fine-tune GPT-2 for a single task. They could apply the un-fine-tuned pretrained model directly to a concrete task, and in many cases it outperformed state-of-the-art models that had been specifically fine-tuned for that task.

GPT-3 then delivered another order-of-magnitude leap in model size and training data, accompanied by a dramatic jump in capability.

The 2020 paper "Language Models Are Few-Shot Learners" showed that, given just a handful of task examples (so-called few-shot examples), the model could accurately reproduce the pattern in the input and thereby perform almost any language-based task you could imagine, often with remarkably high quality.

It was at this stage that people realized the model could be conditioned to perform a specific task simply by modifying the input, that is, the prompt. Prompt engineering was born at that moment.
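The few-shot conditioning described above is nothing more than careful prompt construction. A minimal sketch, with a hypothetical translation task and made-up demonstration pairs (the actual model call is omitted):

```python
def build_few_shot_prompt(examples, query):
    """Concatenate demonstration pairs, then the new query.

    The model is conditioned purely by the pattern in this text;
    no fine-tuning is involved.
    """
    lines = ["Translate English to French."]
    for english, french in examples:
        lines.append(f"English: {english}\nFrench: {french}")
    # End with an incomplete pair so the model continues the pattern.
    lines.append(f"English: {query}\nFrench:")
    return "\n\n".join(lines)

# Hypothetical demonstrations; any consistent pattern would do.
demos = [("cheese", "fromage"), ("apple", "pomme")]
print(build_few_shot_prompt(demos, "house"))
```

Fed such a prompt, a sufficiently large model tends to complete the final line in the demonstrated format, which is exactly the behavior the 2020 paper measured.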

---


That's just how people are: give a smart person a single keyword, and they can pretty much reconstruct the whole story for you.

Mr Panda
Tue Dec 16 08:05:59
Starting with a tag in your post makes it a 'reply' and deprioritizes it relative to a normal post in the algorithm.

@nikitabier, can you maybe look into this? Feels like it's an ancient relic from the dark times of Twitter.

Honestly kind of annoying/pointless functionality nowadays.


@nikitabier Should only work for replies, not posts*

Erwin
Tue Dec 16 08:02:09
Ok, this is really cool!

EgoX: Generate immersive first-person video from any third-person clip

Contributions:
• We propose a novel framework, EgoX, for synthesizing high-fidelity egocentric video from a single exocentric video by effectively exploiting pretrained video diffusion models.

• We design a unified conditioning strategy that combines exocentric video and egocentric priors through width-wise and channel-wise integration, achieving robust geometric consistency and high-quality generation.

• We introduce geometry-guided self-attention and clean latent representations that selectively focus on view-relevant regions and enhance accurate reconstruction, leading to more coherent egocentric synthesis.

• Extensive qualitative and quantitative experiments demonstrate that EgoX outperforms previous approaches by a large margin, achieving state-of-the-art performance on diverse and challenging exo-to-ego video generation benchmarks.
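As a rough illustration of what "width-wise and channel-wise integration" means for latent tensors: the two views are either tiled along the width axis or stacked as extra channels. Shapes and values below are made up for illustration; the actual EgoX conditioning pipeline is more involved.

```python
import numpy as np

# Toy latents in (channels, height, width) layout; real video-diffusion
# latents also carry a time axis, omitted here for brevity.
exo = np.zeros((4, 16, 16))        # exocentric (third-person) latent
ego_prior = np.ones((4, 16, 16))   # egocentric prior latent

# Width-wise integration: place the two views side by side.
width_wise = np.concatenate([exo, ego_prior], axis=2)    # -> (4, 16, 32)

# Channel-wise integration: stack the prior as extra feature channels.
channel_wise = np.concatenate([exo, ego_prior], axis=0)  # -> (8, 16, 16)

print(width_wise.shape, channel_wise.shape)
```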


Paper: https://t.co/G7QUOQ81nu Project: https://t.co/TthGwqAgBT

MrNeRF
Tue Dec 16 08:01:31
My new video.

How to Win Freelance Clients: One AI-Based "Trick"
https://t.co/xS9UktylCq

Not that much about Laravel, but the example uses Laravel, Filament, and AI.

You'll also see how much I pay for Laravel Cloud.


~20 yrs in web-dev, now mostly Laravel. My Laravel courses: https://t.co/HRUAJdMRZL My Youtube channel: https://t.co/qPQAkaov2F

Povilas Korop | Laravel Courses Creator & Youtuber
Tue Dec 16 07:58:01