I am using the Dyad builder with M2.7. My codebase context is about 140k tokens, and M2.7's max context is 204.8k. I have capped the max output at 32k tokens, but today, the whole damn day, I've been getting responses dropped in the middle of the output stream, usually after only 2-5k output tokens, even though the model supports up to 131k output tokens. It freaks me out. Dyad only supports the OpenAI API; could that be the issue? I know 140k of context is a lot for a 205k model, but it didn't behave like this at launch, and my usage is barely 10% of the 5-hour limit window.

I am thinking of moving to another model; I can't work like this. Is there any way to fix it? Or what is a good context window size for M2.7?

P.S. Is there any plan to roll out a 1M-context model?

P.P.S. For god's sake, please use some anti-bot plugin in this channel.
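One way to narrow this down is to check how the stream actually ends: an OpenAI-compatible streaming response should finish with a terminal chunk carrying a `finish_reason` ("stop" for a normal finish, "length" when the output token cap is hit); if the stream just dies with no terminal chunk, the culprit is more likely a connection or proxy timeout than the context window. A minimal sketch of that classification, assuming the `finish_reason` value has been pulled from the last received chunk (the helper name is illustrative, not part of Dyad):

```python
def classify_stream_end(finish_reason):
    """Classify how an OpenAI-compatible streamed response ended.

    finish_reason is taken from the last chunk's choices[0].finish_reason:
      "stop"   -> model finished the response normally
      "length" -> the server hit the max output token cap (e.g. a 32k cap)
      None     -> the stream dropped without a terminal chunk
                  (likely a network, proxy, or timeout problem)
    """
    if finish_reason == "stop":
        return "completed"
    if finish_reason == "length":
        return "hit output token cap"
    if finish_reason is None:
        return "dropped mid-stream"
    return f"other: {finish_reason}"
```

If the answer is consistently "dropped mid-stream" rather than "hit output token cap", reducing the context size or output cap is unlikely to help, and the fix lies with the connection path instead.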