Ever wondered what happens when an AI agency runs itself without human oversight? Turns out… not much good.
I just read an interesting article about a Carnegie Mellon study where they staffed a fictitious ‘agency’ entirely with AI agents. It didn’t go well, but it got me thinking.
* Why would we ever want a fully automated system with no human oversight at all? That seems like a quixotic quest, as decades of automation research have shown.
* Why would we expect AI agents to behave like humans, using the same tools and processes? The benefit of agent-to-agent communication is that it doesn’t have to be done that way.
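A toy sketch of that second point: rather than one agent writing prose (emails, chat messages) for another agent to interpret, they can trade structured, machine-checkable messages. Everything here is purely illustrative, not taken from the study:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical sketch: agents exchange a typed message over an agreed
# schema, instead of imitating human tools and parsing natural language.

@dataclass
class TaskRequest:
    task_id: str
    action: str
    params: dict

def send(msg: TaskRequest) -> str:
    # Serialise to a shared schema -- unambiguous on the receiving end.
    return json.dumps(asdict(msg))

def receive(raw: str) -> TaskRequest:
    return TaskRequest(**json.loads(raw))

request = TaskRequest("t-42", "summarise", {"doc": "report.md"})
round_tripped = receive(send(request))
assert round_tripped == request  # lossless, verifiable hand-off
```

The point isn't this particular schema; it's that agent-to-agent hand-offs can be exact and verifiable in a way human-style communication never is.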
It’s an interesting article, and I hope it fuels your imagination about how we might tackle these challenges.
I don’t think we need clever answers; we need to think deeply about the problems and favour simple solutions.
By the way, did you ever wonder why androids in Star Wars vocalise to each other? It’s wildly inefficient and error-prone in noisy environments. Surely wireless data transfer would be better? But of course, that wouldn’t make for great filmmaking or give R2-D2 its charm…