Your Trading Agent Can't Actually Trade, and Your
March 6, 2026
TL;DR
- TraderBench evaluates 13 AI models on adversarial trading tasks. 8 of the 13 fall back on fixed, non-adaptive strategies, and extended thinking helps knowledge retrieval but has no measurable effect on actual trading decisions.
- ETH Zurich finds that LLM-generated AGENTS.md context files reduce coding-agent success rates by up to 2% while raising inference costs by over 20%. Even human-written files offer only marginal gains.
- NVIDIA argues that small language models — not frontier LLMs — are the right fit for most agentic tasks. With GTC 2026 ten days away, the position paper reads as a strategic preview.
- The Commerce Department's draft AI chip export rules are clashing with the White House, creating regulatory uncertainty across the AI supply chain.