This is a nice set of lessons from Ken at http://gettruss.io. It covers basic usage of the OpenAI Chat Completions API, and several of the lessons mirror my own experience building basic RAG systems over the last few months.
Lessons after a half-billion GPT tokens
https://kenkantzer.com/lessons-after-a-half-billion-gpt-tokens/
The details about keeping the calling code simple (no LangChain) and asking for only 10 items per request really stood out.
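That "no framework, small batches" pattern is easy to picture. A minimal sketch, assuming a labeling task and a plain HTTP call to the Chat Completions endpoint; the `chunks` and `complete` helpers, the model name, and the prompt wording are my own illustration, not code from the article:

```python
import json
import os
import urllib.request

OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def chunks(items, size=10):
    """Yield successive batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def complete(messages, model="gpt-4o-mini"):
    """One bare Chat Completions call -- just an HTTP POST, no framework."""
    req = urllib.request.Request(
        OPENAI_URL,
        data=json.dumps({"model": model, "messages": messages}).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    items = [f"item {n}" for n in range(25)]
    for batch in chunks(items, 10):  # never ask for more than 10 at once
        reply = complete([
            {"role": "system", "content": "Label each input; reply as a JSON array."},
            {"role": "user", "content": json.dumps(batch)},
        ])
        print(reply)
```

The batching keeps each response short enough that the model is less likely to drop or mangle items, which is the failure mode the post describes.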
If you’re over-optimizing or over-anchoring to specific tech or behaviors when using LLMs right now, you’re only coding yourself into a corner. The innovation-then-optimization loop is moving unusually fast in this space.