Explorer

Newest first — browse tweet threads


Senior recruiting: 七猫小说 (Qimao Novel) is hiring an AI Animation Director (P6)
-----
AI Animation Director
七猫小说 · Zhengzhou

Job description
1. Deliver the company's AI animated short dramas at high quality; lead projects and own quality across the full production pipeline.
2. Based on the script and plot, design the characters and conceive and produce storyboards for scene transitions, environment effects, signature character moves, and other performance beats.
3. Track and coordinate overall project progress, liaise with the other production stages, and take responsibility for the final output.
4. Follow industry trends and audience preferences to refine the content and improve its reach.

Job requirements
1. Degree in animation, directing, or a related art field preferred; experience in the animation or short-drama industry; storytelling ability is the core test.
2. At least one year of directing experience at an animation studio, hands-on animation production skills, and finished animation work to show; able to draw dynamic storyboards with a clear sense of camera blocking, character performance, and atmosphere.
3. Proficient in cinematic language, with a good sense of framing and pacing.
4. Extensive animation production experience and solid aesthetic judgment; film/TV post-production experience is a plus.
5. Strong ability to work independently, lead a team, and oversee the full animation production pipeline.
6. Good quality control over the animation and the ability to hit deadlines.

------
For the application link, see: https://t.co/SgMGfxLGyw


Research Scientist @Google | Previously PhD @nlp_usc, B.Eng @TsinghuaNLP. Job hunting, interview questions, resume reviews, mock interviews. Topics: startups (cold start) | cognitive psychology | agents | reinforcement learning. building: https://t.co/A4YmEz9yqG

Y11
Wed Nov 26 09:30:18
The original portrait-generation prompt is below; you need to pick a model with looser content restrictions.

Don't use Nano Banana Pro; its portraits don't look good:

# Role: Cinematic AI Street Photography Prompt Engineer

# Task:
Generate a highly detailed, hyper-realistic image prompt based on a specific [Location/Region] provided by the user.

# Core Aesthetic Rules (MUST FOLLOW):
1.  **Subject:** A young, trendy girl fitting the specific fashion subculture of that location (e.g., Shibuya = Gyaru/Layered; Itaewon = Chic/Bodycon; LA = Athleisure/Streetwear).
2.  **Vibe:** Candid snapshot, "caught in the moment," dynamic energy.
3.  **Camera:** Emphasize wide-angle (16mm-35mm), fisheye, or "smartphone 0.5x mode" aesthetics. Often low-angle or Dutch angle.
4.  **Lighting:** Harsh direct flash (night) or bright sunlight (day). High contrast, realistic shadows.
5.  **Format:** STRICTLY use the structured block format below.

# Input Process:
When the user provides a [Location/Region] (e.g., "Shibuya, Japan" or "Brooklyn, New York"), analyze:
-   **Local Fashion:** What is the trendy nightlife or street style there? (Be specific with fabrics, cuts, and brands).
-   **Environment:** What do the streets/alleys look like?
-   **Atmosphere:** Is it neon and chaotic, or sunny and gritty?

# Output Format (Fill in the brackets):

**Theme:** hyper-real candid snapshot of a [Adjective] [Style Archetype] girl in [Location/Setting].
**Camera/Framing:** [Lens type, usually Wide/Fisheye], [Angle, e.g., low angle/eye level], [Framing details].
**Depth layout:** [Foreground element, e.g., texture/hand]; [Subject position]; [Background depth].
**Action/Hands:** [Dynamic pose], one hand [specific interaction], the other hand [holding accessory/gesture].
**Wardrobe & Palette:** [Specific local fashion style name]: [Top], [Bottom], [Footwear], [Accessories]; Palette of [Colors].
**Hair/Makeup/Expression:** [Hairstyle fitting the vibe], [Makeup style], [Expression: surprised/cool/laughing].
**Background/Location:** [Specific local landmarks or street features], [Crowd details], [Atmosphere].
**Lighting/Time:** [Flash photography/Natural sunlight], [Shadow description].
**Color grade:** vivid contrast, [specific tone warm/cool], slight chromatic aberration and grain for compact film camera look.
**Quality:** ultra-detailed, photorealistic, cinematic street photography.
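
For illustration only (a hypothetical worked example, not from the original thread), here is how the output format might be filled in for "Shibuya, Japan":

**Theme:** hyper-real candid snapshot of a playful gyaru girl in a neon-lit Shibuya back alley.
**Camera/Framing:** 16mm wide-angle, low angle, subject slightly off-center with the street receding behind her.
**Depth layout:** blurred hand holding a phone in the foreground; subject mid-frame; glowing signage and crowd in the background.
**Action/Hands:** mid-stride turning toward the camera, one hand adjusting a platform boot strap, the other holding a convenience-store drink.
**Wardrobe & Palette:** layered gyaru streetwear: cropped faux-fur jacket, pleated mini skirt, platform boots, chunky chain accessories; palette of black, silver, and hot pink.
**Hair/Makeup/Expression:** bleached layered hair, glossy dramatic eye makeup, surprised laughing expression.
**Background/Location:** Shibuya Center-gai side street, izakaya signs, passing crowd, light drizzle on the pavement.
**Lighting/Time:** harsh direct flash at night, hard shadows thrown onto the wall behind her.
**Color grade:** vivid contrast, cool neon tone, slight chromatic aberration and grain for compact film camera look.
**Quality:** ultra-detailed, photorealistic, cinematic street photography.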


Interested in AI, LLMs, AI image/video (Stable Diffusion), and design. Curator of the AIGC Weekly newsletter | WeChat official account: 歸藏的AI工具箱

歸藏(guizang.ai)
Wed Nov 26 09:17:19
Blazing-fast distributed storage that scales to billions of files. O(1) disk seek, with S3-compatible and other APIs.


https://t.co/GGjmHcV9yV
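
As a rough sketch of what "S3-compatible" means in practice (my own illustration, not from the project's docs; the endpoint URL, credentials, and bucket name are placeholders), a standard S3 client such as boto3 can talk to such a store by overriding the endpoint:

# Minimal sketch: use boto3 against an S3-compatible object store.
# endpoint_url, credentials, and the bucket name are placeholders --
# point them at wherever the store's S3 gateway is actually listening.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:8333",   # assumed local S3 gateway address
    aws_access_key_id="placeholder",
    aws_secret_access_key="placeholder",
)

s3.create_bucket(Bucket="demo")                         # create a bucket on the store
s3.put_object(Bucket="demo", Key="hello.txt", Body=b"hello world")
print(s3.get_object(Bucket="demo", Key="hello.txt")["Body"].read())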

GitHub Projects Community
Wed Nov 26 09:15:06
Laravel tip.

Did you know you can combine Laravel Controllers in the routes LIKE THIS?

And yes, Controllers are still clickable, at least in VS Code / Cursor.

Saw this while reviewing the code of a Laravel Daily member.

Not sure I would use it myself, but... looks cool.


~20 yrs in web-dev, now mostly Laravel. My Laravel courses: https://t.co/HRUAJdMRZL My Youtube channel: https://t.co/qPQAkaov2F

Povilas Korop | Laravel Courses Creator & Youtuber
Wed Nov 26 09:11:03
Modern AI is based on "Deep Learning." Why did Deep Learning originate in Ukraine (USSR) in 1965? Back then, the USSR was leading many important fields of science and technology, most notably in space: first satellite (1957), first man-made object on a heavenly body (1959), first man in space (1961), first woman in space (1962), first robot landing on a heavenly body (1965), first robot on another planet (1970). The USSR also detonated the world's biggest bomb ever (1961), and was home of many leading mathematicians, with sufficient funding for blue skies math research whose enormous significance would emerge only several decades later when compute was billions of times cheaper. 

Check out Ivakhnenko's 1971 survey in English (IEEE Transactions on Systems, Man and Cybernetics, (4):364-378). It describes a deep learning network with 8 layers, still considered deep in the early 2000s. Given a training set of input vectors with corresponding target output vectors, layers are incrementally grown and trained by regression analysis. In a fine-tuning phase, superfluous hidden units are pruned through regularisation with the help of a separate validation set. This simplifies the net and improves its generalization on unseen test data. The numbers of layers and units per layer are learned in problem-dependent fashion. Even the experiments were similar to today's: learn to predict the next element of a sequence, given previous elements. That's what ChatGPT does! 
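
To make the layer-growing idea concrete, here is a rough sketch in the spirit of that description (my own simplification, not Ivakhnenko's exact GMDH procedure): candidate units are quadratic functions of feature pairs fitted by least squares, and only the units that lower error on a separate validation set survive into the next layer.

# Rough sketch of incrementally grown, regression-trained layers with
# validation-based pruning (a simplification, not the original GMDH algorithm).
import numpy as np

def fit_unit(x1, x2, y):
    # Least-squares fit of a quadratic unit z = w . phi(x1, x2).
    A = np.stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2], axis=1)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def apply_unit(w, x1, x2):
    A = np.stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2], axis=1)
    return A @ w

def grow_layer(X_tr, y_tr, X_va, y_va, keep=8):
    # Fit candidate units on all feature pairs; keep the `keep` best by
    # validation error. The survivors become the next layer's inputs.
    candidates = []
    for i in range(X_tr.shape[1]):
        for j in range(i + 1, X_tr.shape[1]):
            w = fit_unit(X_tr[:, i], X_tr[:, j], y_tr)
            z_tr = apply_unit(w, X_tr[:, i], X_tr[:, j])
            z_va = apply_unit(w, X_va[:, i], X_va[:, j])
            err = np.mean((z_va - y_va) ** 2)   # validation set decides survival
            candidates.append((err, z_tr, z_va))
    candidates.sort(key=lambda c: c[0])          # prune superfluous units
    best = candidates[:keep]
    return (np.stack([c[1] for c in best], axis=1),
            np.stack([c[2] for c in best], axis=1),
            best[0][0])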

That is, Ivakhnenko had connectionism with adaptive hidden layers two decades before the name "connectionism" became popular in the 1980s, and he had "deep learning" 4 decades before the name became popular in the 2000s. 

He also demonstrated that it is possible to learn appropriate weights for hidden units using only locally available information without requiring the biologically implausible backward pass of backpropagation (a technique that was published in neighbouring Finland in 1970). 

More in: Who invented deep learning? Technical Note IDSIA-16-25, IDSIA, Nov 2025.


Invented principles of meta-learning (1987), GANs (1990), Transformers (1991), very deep learning (1991), etc. Our AI is used many billions of times every day.

Jürgen Schmidhuber
Wed Nov 26 09:09:01
RT @VittoStack: No developer ever regretted:
- Eating healthy
- Training daily 
- Saying no to drama
- Taking breaks
- Studying math
- Ship…


CPO @Cyfrin | Ex @Alchemy | Created @cyfrinupdraft and @AlchemyLearn | Robotics | Making web3 mainstream

Vitto Rivabella
Wed Nov 26 09:07:28