Using AI to program seems to be teaching us something about software architecture philosophy: which architectures are best suited to balancing risk mitigation and development speed.
I don’t have a thesis yet but I can see signs of it.
Risk mitigation drives us to encapsulate and control the outputs of risky code. That containment, in turn, allows lowering standards for low-risk code.
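To make the encapsulation idea concrete, here is a minimal sketch (all names hypothetical): a risky unit, say AI-generated parsing code, is wrapped so its outputs are validated at the boundary, and callers never see an unchecked result.

```python
def guarded(fn, validate):
    """Wrap a risky function so its outputs are checked at the boundary.
    Callers only ever see validated results."""
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        if not validate(result):
            raise ValueError(
                f"{fn.__name__} produced an invalid result: {result!r}"
            )
        return result
    return wrapper

# Hypothetical risky unit, e.g. AI-generated parsing code.
def parse_percentage(text):
    return float(text.strip().rstrip("%"))

# The boundary contract: a percentage must land in [0, 100].
safe_parse = guarded(parse_percentage, lambda v: 0.0 <= v <= 100.0)

print(safe_parse("42%"))  # 42.0
```

The standard lives in the wrapper, not in the risky code itself, which is the point: the risky unit can be sloppy as long as its boundary is strict.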
The pace of development with AI is inversely correlated with the complexity of the codebase and the task at hand. If you weave a concern horizontally throughout the code, the scope of the knowledge you need to hold in your head (and in context) grows with it. Smaller, more modular systems shine.
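A toy sketch of the modular alternative (names and steps hypothetical): a pipeline of small, independent units, each understandable, and regenerable by an AI, without reading the others.

```python
from functools import reduce

def pipeline(*steps):
    """Compose small steps left to right; each step only needs to know
    its own input and output, not the rest of the system."""
    return lambda x: reduce(lambda acc, step: step(acc), steps, x)

# Hypothetical small units: each fits comfortably in one head (or context window).
strip_blanks = lambda lines: [l for l in lines if l.strip()]
lowercase = lambda lines: [l.lower() for l in lines]
dedupe = lambda lines: list(dict.fromkeys(lines))  # preserves order

clean = pipeline(strip_blanks, lowercase, dedupe)
print(clean(["Foo", "", "foo", "Bar"]))  # ['foo', 'bar']
```

Replacing or regenerating any one step touches nothing else, whereas a concern woven through all three would force you to hold the whole pipeline in context at once.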
Small composable units in a resilient execution system?
Sounds a lot like Unix and Erlang.