Thread Easy

Your all-in-one companion for Twitter threads


Explore

Newest first — browse tweet threads


Want to get a weekly curated list of top GitHub repos and similar posts like this?  
Join our newsletter and get them straight to your inbox 👇

https://t.co/fIQKe7W5O3


We're sharing/showcasing the best of @github projects/repos. Follow to stay in the loop. Promoting Open-Source Contributions. UNOFFICIAL, but followed by GitHub

GitHub Projects Community
Mon Dec 01 09:21:03
RT @JonathonPSine: Working paper. 

Runs ~500k Chinese graduate dissertations through plagiarism-detection software, then links them to 60k…


ai agents @hud_evals | owned @AIHubCentral (1 million users, acq.) ex climate protester 🦦 don't do the deferred life plan

Minh Nguyen✈️NeurIPS
Mon Dec 01 09:15:58
New Laravel package: cnaebadi/null-replacer
https://t.co/GYrYSlSIjW

Laravel handles `null`, empty strings, and whitespace inconsistently during validation.

The workaround is to override `prepareForValidation()` in every FormRequest.

This package solves it.
What do you think?

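For reference, a minimal sketch of the per-request workaround described above (not the package's own code): override `prepareForValidation()` in a FormRequest so empty or whitespace-only strings become `null` before the rules run. The request class and field names below are made up for illustration.

```php
<?php

namespace App\Http\Requests;

use Illuminate\Foundation\Http\FormRequest;

// Hypothetical request class; every FormRequest would need the same override.
class StorePostRequest extends FormRequest
{
    public function authorize(): bool
    {
        return true;
    }

    public function rules(): array
    {
        return [
            'title'    => ['required', 'string'],
            'subtitle' => ['nullable', 'string'], // "" or "   " should count as null here
        ];
    }

    // The manual workaround: turn empty / whitespace-only strings into null
    // before validation, so 'nullable' behaves consistently.
    protected function prepareForValidation(): void
    {
        $this->merge(
            collect($this->all())
                ->map(fn ($value) => is_string($value) && trim($value) === '' ? null : $value)
                ->all()
        );
    }
}
```

Presumably null-replacer applies an equivalent normalization globally so this boilerplate isn't repeated in every request; see the linked repo for how it actually hooks in.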

~20 yrs in web-dev, now mostly Laravel. My Laravel courses: https://t.co/HRUAJdMRZL My Youtube channel: https://t.co/qPQAkaov2F

Povilas Korop | Laravel Courses Creator & Youtuber
Mon Dec 01 09:11:01
this is wild 🤯

was quiet for a while ..
.. now PostSyncer is exploding 

what we did: 
- improved the product across the board
- worked a lot on SEO (ofc using Outrank 😅)
- revamped checkout and onboarding


Built Tweet Hunter, Taplio (sold $8m) Growing https://t.co/OyNJ8ZUyOh - https://t.co/jS9GQJ5Ps8 - https://t.co/EFUcKeBbpU - https://t.co/JkVOl1O0S1 - https://t.co/KG9PgxJabg Sharing weekly tips about growth: https://t.co/ereQodN3Ov

Tibo
Mon Dec 01 09:09:49
Errrr @vercel why is my bill skyrocketing?

I used to use 380 GB-hours of function duration for the price of $0 (included in Pro).

Now I'm paying $11 already in a matter of days, for 60 GB-hours.

Did this just get 600% more expensive?

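A quick sanity check of the arithmetic, using only the figures quoted in the tweet (the per-GB-hour rate is inferred from those figures, not from Vercel's published pricing):

```python
# Back-of-the-envelope check of the numbers above.
previously_included = 380      # GB-hours that used to cost $0 on Pro
charged_so_far      = 11.00    # dollars billed so far this cycle
used_so_far         = 60       # GB-hours that produced that charge

implied_rate = charged_so_far / used_so_far            # ~$0.18 per GB-hour
at_old_usage = implied_rate * previously_included      # ~$70 if usage climbs back to 380

print(f"implied rate:        ${implied_rate:.3f} per GB-hour")
print(f"projected full bill: ${at_old_usage:.0f} at the old 380 GB-hours")
```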

Founder @Tailscan for Tailwind CSS Co-Founder @Lexboostai + many random side projects: https://t.co/TPk3m9LhZa, https://t.co/uW4shohLZq, https://t.co/BFujf7veHX

Erwin
Mon Dec 01 09:06:57
this is an intriguing research blog, coming from the same folks as the “RL tunes small subnets” paper.

i find it useful because it discusses how the SGD-RLVR combo updates only ~0.01% of params, compared to adamW, where up to 90% of params get updated. it implies we could drop adamW in favor of plain SGD, at least for RLVR.

i have multiple questions in my head which i would try to answer with my own experimentation next:
> where exactly is the boundary between the “SGD-safe subspace” and the “adamW-needed full space” in post-training?
> can we systematically turn RLVR/SGD’s tiny active subnetwork into a reusable, modular adapter stack for multi-domain training?
> when you force post-training to operate in these tiny, structured subspaces (found by RLVR/SGD or designed as LoRA), how do the global properties of the model change vs full-space adamW RLHF/RLVR?

i need to think of clean experiments on small scale for this and would update this thread itself.

it may not have much yield beyond RLVR but it’s an under-explored question still.

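As a starting point for those experiments, a small sketch (mine, not from the blog) of the measurement itself: count what fraction of parameters move by more than a threshold after a few optimizer steps, for SGD vs adamW. The toy MLP and random batches are placeholders; on dense toy gradients both optimizers will touch nearly every weight, so the interesting gap only shows up on a real RLVR-style run.

```python
# Sketch: fraction of parameters whose value changes by more than `tol`
# after `steps` optimizer steps. Model and data are random placeholders;
# the point is the measurement, not the result.
import torch
import torch.nn as nn

def fraction_updated(model_fn, optimizer_cls, steps=10, tol=0.0):
    torch.manual_seed(0)
    model = model_fn()
    before = {n: p.detach().clone() for n, p in model.named_parameters()}
    opt = optimizer_cls(model.parameters(), lr=1e-3)
    for _ in range(steps):
        x = torch.randn(32, 64)
        y = torch.randint(0, 10, (32,))
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    changed, total = 0, 0
    for n, p in model.named_parameters():
        changed += ((p.detach() - before[n]).abs() > tol).sum().item()
        total += p.numel()
    return changed / total

toy = lambda: nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
print("SGD  :", fraction_updated(toy, torch.optim.SGD))
print("adamW:", fraction_updated(toy, torch.optim.AdamW))
```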

making models learn • eXperiments lab • memes and training lores

tokenbender
Mon Dec 01 09:05:28