An Unfinished Thought About Agents
Sometimes I wonder if chaining LLMs into agents is actually just a fancier way of simulating a broken human workflow. Like, instead of thinking clearly, you spin up a bunch of fake employees who misunderstand each other just slightly less than the real ones would.
I get the promise: autonomous behavior, long-term reasoning, all of it. But every time I try it, it just feels like duct-taping GPT calls together and hoping the whole thing doesn't drift.
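For concreteness, here's roughly the shape of the thing I mean: a minimal planner/worker loop in Python. Everything in it is an illustration, not anyone's real framework; call_llm is a stub standing in for whatever model API you'd actually use, and the role prompts and stopping condition are assumptions I'm making up for the sketch.

```python
# A deliberately minimal "agent" loop: two role prompts passed back and
# forth until the planner claims the task is done. call_llm is a stub
# where a real model API would go.

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; returns a canned reply."""
    return f"(model reply to: {prompt[:40]}...)"

def run_agents(task: str, max_turns: int = 4) -> str:
    # The "planner" breaks the task into a next step.
    planner_note = call_llm(f"You are a planner. Break down this task: {task}")
    result = ""
    for _ in range(max_turns):
        # The "worker" acts on whatever the planner produced last turn.
        result = call_llm(f"You are a worker. Do this step: {planner_note}")
        # The planner reviews the worker's output and decides what's next.
        planner_note = call_llm(
            f"You are a planner. The worker returned: {result}. "
            "Say DONE if the task is finished, otherwise give the next step."
        )
        if "DONE" in planner_note:
            break
    return result

if __name__ == "__main__":
    print(run_agents("summarize a folder of meeting notes"))
```

That's it. Two prompts yelling at each other in a for loop, with a string match deciding when to stop.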
Maybe we're early. Or maybe this whole "agent" thing is just prompt engineering wearing a suit.