EXPLYT TEAM
08.04.2026
7 MINUTES
Debugging, running a project, and refactoring are core IDE capabilities that developers rely on every day. But until Explyt 5.8, even the most advanced AI coding agents could barely work with these essentials.
With Explyt 5.8, we’ve made the agent much more deeply integrated with the IDE: it can now help with debugging, run configurations, and safe refactoring. On top of that, we’ve introduced several important workflow improvements and updated our pricing model.

We’re changing the pricing model for personal users: instead of abstract credits, we now use clear per-minute billing based on the time the model spends processing a request.
This refers specifically to the time the model takes to process a request and return a response, not the time you spend working in the plugin or the length of the whole session. In practice, model processing time maps to your working time at roughly 1:16: 5 minutes of processing corresponds to about 80 minutes of your work with agents in the IDE.
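As a quick sanity check on the 1:16 ratio quoted above, here is a minimal sketch; the ratio is the only figure taken from this post, and the helper name is ours:

```python
# Rough conversion between billed model-processing time and hands-on
# working time, using the ~1:16 ratio quoted above. Illustrative only.
PROCESSING_TO_WORK_RATIO = 16

def work_minutes(processing_minutes: float) -> float:
    """Approximate minutes of agent-assisted IDE work per billed minute."""
    return processing_minutes * PROCESSING_TO_WORK_RATIO

print(work_minutes(5))  # 5 billed minutes -> 80 minutes of work
```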
By our estimate, this per-minute pricing costs about half as much as a direct subscription to any single provider's tool, such as Claude Code or Codex, and about a tenth as much as accessing the same models via API keys. At the same time, it gives you access to top models from all providers, rather than just one, as with a direct subscription.
The personal subscription still includes access to leading proprietary models from OpenAI, Anthropic, and Google Gemini, as well as open-source models such as GLM, MiniMax, Qwen, and Kimi.
The agent can now control the IDE debugger: set breakpoints, run code in debug mode, and inspect variable state. This enables more advanced debugging techniques, including step-by-step hypothesis testing, adding temporary logs and then removing them, and running tests under the debugger to analyze specific failure scenarios.
This is especially useful when you run into a complex bug that is difficult to diagnose from logs and code analysis alone, and you need to look inside the program’s execution to understand what is happening.
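To make that concrete, here is a toy bug of the kind that resists log-based diagnosis; the code and the bug are our own illustration, not an example from Explyt:

```python
# A toy bug that is hard to spot from logs alone: the running total
# looks plausible when printed, but inspecting state at a breakpoint
# shows that some "amount" values arrive as strings, not numbers.
def total_amount(rows):
    total = 0.0
    for row in rows:
        # An agent driving the debugger could break here and inspect
        # type(row["amount"]) to test a "mixed types" hypothesis.
        total += float(row["amount"])  # the fix: coerce before adding
    return total
```

Here the debugger workflow (break, inspect the variable, confirm the hypothesis) pinpoints in one run what a log of totals would never reveal.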

The agent can now find and run project configurations, including tests, builds, and applications, directly through the IDE. The Get Configuration tool allows the agent to retrieve a list of available configurations for a file, package, or the entire project, including Gradle and Maven tasks as well as custom configurations. The Run Configuration tool launches the selected configuration and returns the result to the agent, including console output, test results, and compilation errors.
This is more efficient than building or running a project through the terminal, which is what most AI tools rely on, because it reduces token usage by providing the model with structured run results and uses the SDK already configured in the project. It is currently supported in Java-based IDEs such as IntelliJ IDEA and Android Studio, as well as in PyCharm, with support for other IDEs coming in future releases.
This is useful when you ask the agent to write code and want it to run tests on its own to verify everything works, or when you need to build the project and analyze build errors without switching to the terminal.
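To see why structured results save tokens compared to raw terminal output, picture the run result as a small typed record instead of a wall of console text. The shape below is a hypothetical sketch of such a record, not the actual tool schema:

```python
from dataclasses import dataclass, field

# Hypothetical shape of a structured run result. A record like this
# carries only the signal the model needs (status, failing tests,
# compile errors), unlike raw console output padded with progress
# bars, banners, and repeated stack framing.
@dataclass
class RunResult:
    exit_code: int
    failed_tests: list[str] = field(default_factory=list)
    compile_errors: list[str] = field(default_factory=list)

    @property
    def ok(self) -> bool:
        return (self.exit_code == 0
                and not self.failed_tests
                and not self.compile_errors)
```

The agent can branch on `result.ok` directly instead of re-parsing console output on every run.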
The agent now has a tool for renaming symbols, including classes, methods, fields, variables, and parameters. The good old Shift+F6 refactoring action is now available to the agent too.
Unlike the plain text search-and-replace that CLI-based agents rely on, this tool uses the IDE’s native refactoring engine. Text-based replacement can easily break code and then waste your time and tokens on fixing it, whereas our refactoring tool correctly updates every declaration and usage of a symbol, including getters, setters, test classes, and imports, without breaking the code.
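A two-line sketch shows why blind text replacement is dangerous (a hypothetical snippet; any language behaves the same):

```python
# Blind text replacement renames more than the intended symbol:
# renaming the variable `user` also corrupts `user_id`, `users`,
# and the function name `load_user`.
src = "user = load_user(user_id)\nusers.append(user)"
broken = src.replace("user", "account")
print(broken)
# account = load_account(account_id)
# accounts.append(account)
```

A semantic rename, by contrast, touches only the declarations and usages of the one symbol you picked.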
Related symbols are renamed automatically as well. For example, when renaming a field, the agent will also update the corresponding getters and setters.
This is especially useful when you ask the agent to perform a large-scale refactoring that includes renaming entities across the project, when you want to standardize method or variable names, or when you are working to reduce technical debt.
You can now queue messages in the chat without waiting for the agent’s previous response to finish. If you think of a clarification, a new instruction, or a follow-up while the current response is still being generated, you can send it immediately and it will not be lost. The agent will process queued messages one by one. Previously, in these situations, you had to wait for the current response to finish or manually re-enter your message, which made the conversation feel less natural.
While a message is being processed, any new message sent with the Send button is added to the queue. You can also send a message directly to the chat, bypassing the queue, by pressing Ctrl+Enter. The button to the left of the Send button opens the queue. You can reorder queue items by dragging them. When you hover over a queue item, additional controls appear: you can set how many times the message should repeat, force-send it to the chat by stopping the current processing, or remove it from the queue.
This is useful when you are having a long iterative conversation with the agent, formulating clarifications as you read the response, or when you want to add another step while the previous request is still running. Repeats are especially helpful when working with a weaker or less proactive model: you can send a message like “Keep going until you’re confident in the result,” set it to repeat 10 times, and step away for a coffee.

For models connected through OpenAI-compatible mode, we’ve added a field that allows you to specify the context size manually. Previously, the plugin could not always determine the correct context limit for such models, which sometimes led to incorrect values being shown in the interface. Now you can set the exact context size yourself, and the plugin will correctly manage the size of outgoing requests.
This is useful when you use OpenAI-compatible models such as Qwen or DeepSeek with a non-standard context size and want the plugin to calculate limits correctly when sending requests and compressing chat history.
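Conceptually, the plugin can use the configured limit to keep outgoing requests inside the model’s window. Below is a simplified sketch of that idea, assuming a `count_tokens` helper; it is not Explyt’s actual implementation:

```python
def trim_history(messages, context_limit, count_tokens):
    # Keep the most recent messages whose combined token cost fits
    # the configured context limit; older messages are dropped here
    # (a real plugin might compress them instead).
    kept, used = [], 0
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if used + cost > context_limit:
            break
        kept.append(msg)
        used += cost
    return kept[::-1]  # restore chronological order
```

With a correct `context_limit`, trimming happens exactly when needed; with a wrong one, requests are either rejected by the provider or truncated too aggressively.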
The agent in Rider now has two new tools for working with .NET code.
Class Search: the agent can search for .NET types by name across your project, libraries, and NuGet packages. The search supports filtering by scope, such as project only, libraries only, or all, as well as by namespace prefix.
Decompilation: if the source code of a class is not available, for example, for a type from a NuGet package or a system assembly, the agent can decompile it and inspect its contents directly in the IDE, without requiring you to locate the source repository or download the package separately.
This is useful when you ask the agent to explore a third-party library, understand how a type from a NuGet package works, or find the right class in a large solution with many dependencies.
The plugin has been updated for compatibility with the 2026.1 versions of JetBrains IDEs. If you have already upgraded to the new IDE version, Explyt will keep working without any changes on your side.
Install the update now and see how Explyt transforms the way you build, test, and fix code.


