I totally buy that AI has made you more productive. And I buy that if other lawyers were more agentic, they could also get more productivity gains from AI. But I think you're making my point for me. The reason it takes lawyers all this schlep and agency to integrate these models is that they're not actually AGI! A human on a server wouldn't need some special Westlaw/Lexis connection; she could just directly use the software. A human on a server would improve directly from her own experience with the job, and pretty soon be autonomously generating a lot of productivity. She wouldn't need you to put off your other deadlines in order to micromanage the increments of her work, or to turn what you're observing into better prompts and few-shot examples.

While I don't know the actual workflow for lawyers (and I'm curious to learn more), I've sunk a lot of time into trying to get these models to be useful for my work, specifically on tasks that seemed like they should be dead center in their text-in, text-out repertoire (identifying good clips, writing copy, finding guests, etc.). And this experience has made me quite skeptical that there's a bunch of net productivity gains currently available from building autonomous agentic loops. Chatting with these models has definitely made me more productive (but in the way that a better Google search would also make me more productive).

The argument I was trying to make in the post was not that the models aren't useful. I'm saying that the trillions of dollars in revenue we'd expect from actual AGI are not being held up because people aren't willing to try the technology. Rather, it's just genuinely super schleppy and difficult to get human-like labor out of these models.