Last month brought significant advancements to Refact.ai Agent:
Bonus: highlights from DevWorld in Amsterdam and MWC in Barcelona, where our team participated.
Let’s explore the details!
Anthropic’s best coding model, optimized for real-world tasks, autonomy, and precision, is now available in Refact.ai Agent.
The new LLM excels in executing tasks step-by-step and delivering production-ready code. As Anthropic claims: “Claude 3.7 Sonnet shows particularly strong improvements in coding and front-end web development.”
To work with it, just select Claude 3.7 Sonnet in your Refact.ai extension and try it in your environment!
Refact.ai now integrates with MCP (Model Context Protocol) servers, enabling AI Agent to autonomously interact with external tools and services.
Your AI Agent can interact with Jira, Figma, Supabase, and 1900+ other data sources and tools — automating even more of your workflow. For the full list, check the official MCP server repository on GitHub or mcp.so.
To set up, go to Settings → Setup Agent Integration → MCP Server.
MCP has enriched the set of tools available to AI Agent, which you can read more about in Refact.ai Docs.
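For orientation, MCP clients typically declare each server as a command to launch plus its arguments. A minimal sketch in the widely used `mcpServers` JSON convention — the server name, command, and path here are illustrative, and Refact.ai’s exact configuration format may differ (see Refact.ai Docs):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  }
}
```

Once a server is registered this way, its tools become available to the Agent alongside the built-in ones.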
Thinking Model: Give your AI Agent a boost with the o3-mini reasoning model for planning. To activate, click the settings button above the chat bar.
Rollback Feature: Undo AI-made changes easily. Refact.ai Agent now stores checkpoints and can revert file changes — ask for this in chat or enable Changes Rollback for auto snapshots in chat settings.
Faster and More Accurate Patch Creation: Previously, patching was a two-step process involving two models. Now, the model applies changes directly to the file — simpler and with a much higher success rate.
Model Updates: Added Gemini, Grok (XAI), and OpenAI o1; new CPU embeddings model — thenlper/gte-base/cpu; deprecated models removed.
Offline Mode: Server now works with preloaded models without a Hugging Face connection and supports manual weight uploads.
Enterprise Only: Mistral/24b/instruct/vllm model & qwen2.5/7b/14b/32b/instruct/vllm models with tool use capabilities.
As AI Agent grows more powerful, we’re introducing a new request system to ensure optimal usage:
Note: One “request” = one task you manually assign. This means AI Agent will handle your multi-step tasks end-to-end within a single request, without extra charges until completion.
You’ll also see your remaining daily requests when approaching the limit.
We designed this system to be user-friendly: our mission is to build the future of programming and make it accessible to all. So, our autonomous AI Agent is available to everyone, with the Pro plan still priced lower than any alternative solution. And if you need more, scaling up is easy.
For any questions, welcome to our Discord!
Over the past two weeks, our team engaged with developers globally at the DevWorld conference in Amsterdam and MWC 2025 in Barcelona. At both events, we invited attendees to collaborate with our AI Agent in building a Tetris game from scratch, encouraging them to propose new features that we then assigned to the Agent to implement. Visitors were impressed by AI Agent’s ability to autonomously handle tasks end-to-end, completing each feature without human intervention.
Complementing these demonstrations, our Head of Product, Nick Frolov, drew a packed audience for his practical presentation on building AI development Agents.
Update your VSCode or JetBrains plugin and meet the enhanced Refact.ai experience. AI Agent is evolving fast — adapting to your workflow, understanding your codebase, and executing tasks like your digital twin.
Experience vibe coding today!