Raw voice notes. No script, no editing. Just whatever I'm thinking about.
GPT 5.4 and Codex CLI sub-agents (1:03)
So I'm using GPT 5.4 in Codex CLI, and given that you can now also launch sub-agents inside Codex CLI, I really like how powerful this has become. This morning I pointed it at a side project whose code was, I would say, quite bloated and monolithic, and I got to see how well it organized and restructured that code. Now I'm trying it on another project, and it has been working pretty well there too. So I'm very impressed with GPT 5.4.
AI agents in tmux (1:47)
For the last couple of days, I've been running an experiment: using my AI agents inside tmux. tmux is a terminal multiplexer, which splits a single terminal window into different panes. It's really amazing to see how you can leverage the true power of these different AI agents this way. You can literally launch an individual agent in each pane and have them work on different parts of a single problem. Say one agent thinks about the UX, another thinks about the UI, and another thinks about something else in the product development pipeline. That's just one of the ways; there are so many ways you can do it. As a UX and UI designer, I can also create git worktrees, assign each agent to an individual worktree, ask them to produce different UI solutions to the same problem, and have them share the results with me. I feel like this is a truly powerful way of having different AI agents work on the same problem.
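The worktree-per-agent setup I'm describing can be sketched in a few commands. This is a minimal, hypothetical sketch: the paths, branch names, and the `agents` session name are all made up for illustration, and the throwaway repo at the top only exists so the sequence runs end to end. In each pane you would then launch whatever agent CLI you use.

```shell
#!/bin/sh
set -e

# Demo setup only: a throwaway repo so the sketch is runnable end to end.
rm -rf /tmp/agents-demo
mkdir -p /tmp/agents-demo/main && cd /tmp/agents-demo/main
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# One worktree (and branch) per agent, so each works in isolation:
git worktree add ../agent-ui -b agent/ui-variant
git worktree add ../agent-ux -b agent/ux-variant

# If tmux is available, give each agent its own pane,
# each starting inside its own worktree:
if command -v tmux >/dev/null 2>&1; then
    tmux new-session -d -s agents -c /tmp/agents-demo/agent-ui \
        && tmux split-window -h -t agents -c /tmp/agents-demo/agent-ux \
        && tmux kill-session -t agents || true  # demo only; normally you'd attach
fi
```

From here, each agent commits to its own branch, and you compare the resulting UI variants by switching between worktree directories rather than stashing or switching branches in one checkout.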
Why we're drawn to LLMs (1:42)
Why do we really like using LLMs? I was discussing this with one of my colleagues; we shared our points of view, and afterwards, reflecting on the conversation, this analogy came to mind. Imagine you're talking to an expert about, let's say, the design field, and mid-conversation you bring up something else entirely. It's going to take that person time to switch context, even if they know the other field as well. This is where LLMs differ: they switch context very quickly. What I'm not sure about is, when they do switch, how reliable the information they provide about that other field is, especially when I, as a human being, don't know much about it myself. In conclusion, I feel these systems thrive because they can produce a volume of information at a speed our brains were never built for. That's one of the most impressive parts of these systems.