Totally forgot about this experiment where I found it was faster and cheaper to do classification via embeddings than with the fastest/cheapest LLM (at the time).
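For context, here's a minimal sketch of what embedding-based classification typically looks like. The tweet doesn't name the embedding model, classifier, or dataset, so the OpenAI `text-embedding-3-small` model, the scikit-learn `LogisticRegression`, and the labeled examples below are all illustrative assumptions, not the author's actual setup.

```python
# A sketch of embeddings-based classification, assuming an OpenAI embedding
# model and a scikit-learn classifier; the original tweet doesn't specify
# which models or classifier were used, so these are illustrative choices.
from openai import OpenAI
from sklearn.linear_model import LogisticRegression

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(texts: list[str]) -> list[list[float]]:
    """Embed a batch of texts in a single API call."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in resp.data]

# Hypothetical labeled examples; in practice you'd use your own dataset.
train_texts = ["refund my order", "love this product", "where is my package"]
train_labels = ["support", "praise", "support"]

# Train a lightweight classifier once on the embedded training set...
clf = LogisticRegression(max_iter=1000).fit(embed(train_texts), train_labels)

# ...then each classification is one cheap embedding call plus a local
# predict, instead of a full LLM completion per input.
print(clf.predict(embed(["my item never arrived"])))
```

The cost/latency win comes from the shape of the work: embedding calls are priced per token on a small model and the actual decision runs locally, whereas the LLM route pays for prompt plus completion tokens on every single input.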