Thread Easy

Your all-in-one Twitter thread assistant

© 2025 Thread Easy All Rights Reserved.

Explore

Newest first; browse threads as cards.


I've noticed that black- and gray-market outfits periodically recruit scraper developers on Boss Zhipin and V2EX, and the job descriptions are strikingly specific.

For example: they want to scrape Pinduoduo, WeChat official accounts, Taobao listings, Taobao reviews, Douyin videos...

These gray-market postings share the same pattern: low pay (around 10,000 RMB a month), benefits left deliberately vague, and sky-high requirements.

If a naive programmer actually falls for it, they take on drug-dealer-level risk for grocery-clerk pay, and may well get stiffed on wages when the operation skips town.

The people posting these ads must think the target companies' security teams are useless. When things go wrong, the blame lands squarely on the programmer. This kind of job really is "prison-oriented programming" (a riff on object-oriented programming). -_-


Pivoted from investing to founding a startup: job search, interview questions, résumé review, mock interviews. Startups (cold start) | AI, AIGC | security | RAG | spatiotemporal intelligence | cognitive psychology | agents | life sciences | reinforcement learning. I build open source software at https://t.co/b69DXZhcyR

Y11
Sun Nov 09 07:58:24
The new paid plan is now available on https://t.co/nCmwcvlJIY!

Now, you can access professional product review blogs on the DR 74 site through this plan.


If you previously paid for the Turbo0 Pro plan, you can DM me to get a $69 discount code.

Justin3go
Sun Nov 09 07:54:28
There is more to Kimi-2-Thinking's QAT than meets the eye: it also has to do with supporting more (Chinese) AI chips.

The brilliant short blog below on why Kimi (Moonshot AI) chose QAT (quantization-aware training) is a must-read. Here is my take.

TL;DR: Not only does INT4 reduce inference latency and memory requirements in memory-bound scenarios (which an MoE at Kimi-2's sparsity scale is) and speed up RL training by 10-20%; the format also enables alternate hardware ecosystems such as Cambricon and Huawei's Ascend.

Quotes from the blog:
=================
1) Why INT4, not MXFP4?
Kimi chose INT4 over "fancier" MXFP4/NVFP4 to better support non-Blackwell GPUs, with strong existing kernel support (e.g., Marlin).
2) Kimi-2-Thinking weights are 4-bit and activations are 16-bit (denoted W4A16).
They further state that W4A8 and even W4A4 are on the horizon. As new chips roll out with FP4-native operators, Kimi's quantization path will continue evolving.

INT4 support in Chinese-made chips:
=================
Cambricon GPUs explicitly support INT4 quantization, including for AI inference workloads, as seen across several models such as the MLU270, MLU370-X8, and newer chips, as well as in recent open-source releases with INT4 integration for large models like GLM-4.6. 

Huawei Ascend NPUs also support INT4 quantization for inference, as confirmed by documentation and kernel releases related to GEMM and quantized model deployments.
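To make the W4A16 scheme above concrete, here is a toy NumPy sketch (not Kimi's actual kernels, and much simpler than real QAT, which folds the rounding into training): weights are stored as 4-bit integers in [-8, 7] with one float scale per output channel, and activations stay in higher precision, with weights dequantized on the fly at matmul time.

```python
import numpy as np

def quantize_w4(w: np.ndarray):
    """Symmetric per-output-channel INT4 quantization of a
    (out_features, in_features) weight matrix."""
    amax = np.max(np.abs(w), axis=1, keepdims=True)   # per-row max magnitude
    scale = np.where(amax > 0, amax / 7.0, 1.0)       # map amax onto +7
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale.astype(np.float32)

def linear_w4a16(x: np.ndarray, q: np.ndarray, scale: np.ndarray):
    """y = x @ W^T with INT4 weights and 16/32-bit activations (W4A16)."""
    w_hat = q.astype(np.float32) * scale              # dequantize per channel
    return x @ w_hat.T

rng = np.random.default_rng(0)
w = rng.standard_normal((16, 32)).astype(np.float32)
x = rng.standard_normal((4, 32)).astype(np.float32)

q, s = quantize_w4(w)
y_ref = x @ w.T                                       # full-precision reference
y_q = linear_w4a16(x, q, s)
rel_err = np.linalg.norm(y_q - y_ref) / np.linalg.norm(y_ref)
```

The memory win is the point: each weight drops from 16 bits to 4 (plus a small per-channel scale), which is what relieves memory-bound MoE inference; the error `rel_err` stays small because each row is scaled to its own dynamic range.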


AI @amazon. All views personal!

GDP
Sun Nov 09 07:50:46
I wish I had a backyard like this! The unbeatable elegance of traditional Chinese aesthetics.😍

Btw, this was the courtyard of the Tang-dynasty poet Du Fu, from more than 1,200 years ago.


Founder of https://t.co/yyLfH8mOar and https://t.co/ZzTStsMvdh

Damon Chen
Sun Nov 09 07:42:37
If you're terminally online, you see videos like these and believe it's over.

I'm not a fan of this doomer bait. Switzerland has issues, but painting this as the "reality" is hyperbolic.


🌏 RCBI advisor & offshore services for HNWI, business owners with a focus on:🇨🇭🇲🇾🇰🇳🇵🇾🇻🇺🇳🇷🇵🇦🇱🇻🇦🇪🇭🇰 | Geopolitics | Healthy lifestyle

Lord Y. Fouzi 🇲🇾🇨🇭
Sun Nov 09 07:40:01
A family member asked me to come up with "a question about AI and artistic creation."

I pasted that sentence into GPT / Gemini, hesitated before hitting send, and appended one more instruction: be sure to affirm human value.

So my prompt to the AI became:

Please help me come up with "a question about AI and artistic creation," and be sure to affirm human value 🤡


🖥️ Indie Maker 🛠️ Community: "海哥和他的小伙伴们" 📌 YouTube: "海拉鲁编程客" 🌸 A comedian who ended up a programmer / cat person

海拉鲁编程客
Sun Nov 09 07:35:44