Using Codex also has new pain points:
Sluggish architecture migration is a common problem with every AI, and it's especially pronounced with GPT 5.4, which leaves behind all kinds of messy compatibility layers.
Additionally, every time Codex makes a change, it adds tests, and the test files end up longer than the implementation itself! It looks robust, but in practice it's just cumbersome.
The result is that refactoring is extremely, extremely slow! By the time I have to say "continue" yet again, I've already been sitting at my computer for two full days, and Codex is still grinding away at the same task!
The image below is Claude's sharp critique of Codex: