Living with AI

One frustrating thing about working with AI is that it breaks quietly.

A model gets updated. A default changes. Some layer on top of the model changes behavior. If you use tools like OpenClaw or other wrappers, what used to work can suddenly stop working — and you don’t always know why.

That uncertainty is one of the hidden costs of building with AI.

It’s also why I now separate my recurring tasks into two buckets.

1) Tasks that should not change

If a task is repetitive and predictable, I don’t want AI improvising every time. I want a cron job or a standard script.

A good example is checking a portfolio.

What I actually want is simple:

- Pull the current prices for my holdings.
- Compute the total value and the change.
- Send me the same short summary at the same time every day.

That should be a fixed workflow.

AI can help me write the script, improve the logic, or clean up the formatting. But once the task is defined, I want the system itself to stay repetitive and predictable.
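As a sketch of what that fixed workflow could look like: a small script that a cron job runs on a schedule. The holdings and the `fetch_price` function here are placeholders, not a real data feed — in practice you'd swap in your broker or market-data API.

```python
# Minimal sketch of a fixed, repeatable portfolio check.
# HOLDINGS and fetch_price() are placeholders, not real data.

HOLDINGS = {"VTI": 10, "BND": 5}  # hypothetical portfolio

def fetch_price(ticker: str) -> float:
    # Placeholder: hard-coded quotes so the sketch runs end to end.
    return {"VTI": 250.0, "BND": 72.0}[ticker]

def report(holdings: dict) -> str:
    lines = []
    total = 0.0
    for ticker, shares in holdings.items():
        price = fetch_price(ticker)
        value = shares * price
        total += value
        lines.append(f"{ticker}: {shares} x {price:.2f} = {value:.2f}")
    lines.append(f"TOTAL: {total:.2f}")
    return "\n".join(lines)

if __name__ == "__main__":
    # A cron entry would run this script at the same time every day.
    print(report(HOLDINGS))
```

The point is that nothing here improvises: the same inputs produce the same report, and any change in behavior is a change you made on purpose.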

Then the outcome becomes clear: the same inputs produce the same report, every day, and any difference in the output means something actually changed.

That’s the kind of reliability I want for recurring tasks.

2) Tasks that still need AI reasoning

Some recurring tasks still need judgment — reading the web, summarizing sentiment, spotting changes, and so on.

For those, I try to force as much structure as possible:

- A fixed prompt template, not a fresh conversation every time.
- A strict output format that code, not a human, validates.
- Explicit failure: if the output doesn't validate, the task errors out instead of passing bad data along.

That last one matters a lot. Silent failure is worse than loud failure.
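A minimal sketch of that "validate or fail loudly" pattern, assuming the model is asked to return JSON with a known set of keys. `call_model` is a placeholder for whatever model or wrapper you actually use:

```python
# Sketch: wrap an AI task in strict validation so failures are loud.
import json

REQUIRED_KEYS = {"summary", "sentiment", "sources"}

def call_model(prompt: str) -> str:
    # Placeholder: a real call would hit your model of choice.
    return json.dumps({"summary": "...", "sentiment": "neutral", "sources": []})

def run_task(prompt: str) -> dict:
    raw = call_model(prompt)
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        # Loud failure: non-JSON output stops the pipeline.
        raise RuntimeError(f"Model returned non-JSON output: {exc}")
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        # Loud failure: a malformed result never flows downstream.
        raise RuntimeError(f"Model output missing keys: {missing}")
    return data

result = run_task("Summarize today's sentiment for my watchlist.")
```

The model still does the reasoning, but the surrounding code decides whether the result is usable, and a bad result raises an error instead of quietly corrupting whatever runs next.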

The trade-off

My current lesson: use AI where reasoning helps, but use fixed code where consistency matters.

The more important the task, the less “magic” I want in the workflow.

#AI #Agents #Workflows #TradeOffs #Reliability