Thread Easy
  • Explore
  • Compose thread

Your all-in-one companion for Twitter threads

© 2025 Thread Easy All Rights Reserved.

Explore

Newest first — browse tweet threads

Turn on to blur preview images; turn off to show them clearly

RT @darshil: having tried both @superpower and @rythmhealth, there is a clear winner one took over a month to create an action plan the ot…

making everything for sale

Joshua Voydik
Thu Nov 06 21:48:42
RT @a16z: Mark Zuckerberg says progress comes from inventing tools that let us observe the world in a new way, from the microscope to AI mo…

benahorowitz.eth
Thu Nov 06 21:45:39
Thread

the distribution is anything but normal

snwy
Thu Nov 06 21:38:49
RT @doodlestein: This is just the beginning. This kind of stuff will find huge bipartisan support. Companies that replace human employees w…

Former Quant Investor, now building @lumera (formerly called Pastel Network) | My Open Source Projects: https://t.co/9qbOCDlaqM

Jeffrey Emanuel
Thu Nov 06 21:34:20
smh Charles the AI just got delayed even further.

gpus and tractors Neural networks from Scratch book: https://t.co/hyMkWyUP7R https://t.co/8WGZRkUGsn

Harrison Kinsley
Thu Nov 06 21:33:52
If you're using Colab and you feel like training your model on GPU is slow, switch to the TPU runtime and tune the "steps_per_execution" parameter in model.compile() (higher = more work being done on device before moving back to host RAM)

Can often see a 4-5x speedup.


Co-founder @ndea. Co-founder @arcprize. Creator of Keras and ARC-AGI. Author of 'Deep Learning with Python'.

François Chollet
Thu Nov 06 21:33:20
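The Colab tip in the tweet above can be sketched in Keras. A minimal, hypothetical example — the model, the toy data, and the value 32 for `steps_per_execution` are illustrative assumptions, not from the tweet; the useful value depends on your model and hardware:

```python
import numpy as np
import tensorflow as tf

# steps_per_execution batches multiple training steps into a single
# device call, cutting host<->device round-trips. The benefit is
# largest on TPUs, where each dispatch from the host is expensive.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    steps_per_execution=32,  # higher = more work done on-device per call
)

# Toy data just to exercise the training loop.
x = np.random.rand(256, 20).astype("float32")
y = np.random.randint(0, 10, size=(256,)).astype("int32")
history = model.fit(x, y, batch_size=32, epochs=1, verbose=0)
```

On an actual TPU you would also wrap model construction in a `tf.distribute.TPUStrategy` scope; this sketch omits that so it runs anywhere.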