Thinking Machines introduced Tinker, a new training API that gives researchers and builders granular control over language-model fine-tuning while the service manages the heavy lifting of distributed training and infrastructure. The company positions Tinker as a “researcher-first” product: you control the algorithms, datasets, schedules, and evaluation routines; Tinker orchestrates GPUs, checkpoints, and fault-tolerant runs. Announcement and product pages: thinkingmachines.ai/tinker and the “Announcing Tinker” blog post.
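To make that division of labor concrete, here is a minimal sketch of what a training loop against an API like this could look like. The announcement highlights low-level primitives such as forward_backward, optim_step, sample, and save_state; everything else below (the client construction, argument names, and data handling) is an illustrative assumption, not the documented interface.

```python
# Minimal sketch of a Tinker-style training loop: you own the loop, the data,
# and the optimizer settings; the service owns the GPUs behind the client.
# The primitives (forward_backward, optim_step, save_state) are named in the
# announcement; the rest is assumed for illustration.
import tinker  # assumed package name

def my_dataloader():
    """Placeholder for your own data pipeline; yield real batches here."""
    yield from []

service = tinker.ServiceClient()

# You choose the base model and the fine-tuning method; the service
# provisions and schedules the hardware behind this handle.
client = service.create_lora_training_client(base_model="meta-llama/Llama-3.2-1B")

for batch in my_dataloader():
    client.forward_backward(batch)         # compute loss and gradients remotely
    client.optim_step(learning_rate=1e-5)  # apply *your* optimizer settings

checkpoint = client.save_state()  # handle for resuming or evaluating later
```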
The launch reflects a broader push to make serious customization of open-weight models accessible without standing up bespoke clusters. Tinker exposes low-level knobs (e.g., optimizer settings, learning-rate schedules, and the choice between parameter-efficient and full fine-tuning) and provides a cookbook of reference projects for common goals such as tool use, math reasoning, preference learning (RLHF), and supervised chat tuning. For teams that want to reproduce research or run A/B baselines, the API emphasizes determinism, logging, and repeatable pipelines, while keeping costs predictable via managed scheduling and checkpoint reuse.
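Since repeatable experiments are the point, a sketch of the kind of seeded experiment spec these knobs imply may help; the field names and the warmup-plus-cosine schedule below are generic choices, not Tinker's actual configuration schema.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class FinetuneConfig:
    """One experiment = one frozen, seeded config (field names are assumptions)."""
    base_model: str = "Qwen/Qwen3-8B"  # any open-weight model id; placeholder
    method: str = "lora"               # parameter-efficient ("lora") vs. "full"
    lora_rank: int = 32
    learning_rate: float = 1e-5
    warmup_steps: int = 100
    total_steps: int = 2000
    seed: int = 0                      # fixed seed -> run-to-run repeatability

def lr_at(step: int, cfg: FinetuneConfig) -> float:
    """Linear warmup, then cosine decay to zero: a common, deterministic schedule."""
    if step < cfg.warmup_steps:
        return cfg.learning_rate * step / cfg.warmup_steps
    progress = (step - cfg.warmup_steps) / (cfg.total_steps - cfg.warmup_steps)
    return cfg.learning_rate * 0.5 * (1.0 + math.cos(math.pi * progress))
```

Freezing a config like this and logging it alongside checkpoints is what makes A/B baselines comparable from run to run.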
Why it matters: As organizations move beyond off-the-shelf LLMs, the ability to fine-tune reliably, with transparent controls, becomes a competitive moat. Tinker targets that gap: it lowers operational overhead while preserving the experimental flexibility researchers need, accelerating the path from prototype to production. Learn more on the product site: Tinker by Thinking Machines (thinkingmachines.ai/tinker).