Script testing

Testing is an important part of the bot development process. It lets you verify that the bot works as intended and track how changes to the script affect its behavior.

Manual testing

In its simplest form, testing can be carried out manually: you send messages to the bot and check that its actual behavior matches the expected one.

To do manual testing, deploy the bot in a separate channel unavailable to end users. We strongly recommend using a channel of the same type as the main one, so the testing environment resembles production as closely as possible.

Tip: During active development, or to verify small changes to the script, you can also use the test widget. It allows testing the bot directly from the JAICP interface.

Automated tests

When a bot script has a complex structure, even minor changes can significantly affect how it operates. The bigger the script grows, the more time-consuming and expensive manual testing becomes.

JAICP allows writing automated tests for scripts, which describe the required bot behavior in a declarative style. If you cover the whole script with automated tests and maintain them alongside the main code base, you can be certain that the bot behaves exactly as expected.

Tests are created in the test directory of the project and run automatically before each bot deployment. You control the list of tests to execute (via the chatbot.yaml configuration file) as well as the execution mode: in some cases, you can skip running tests altogether.
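For illustration only, the test-related part of chatbot.yaml might look like the sketch below. This is an assumed layout: the testConfigs and testMode keys are hypothetical names used here for clarity, so check the JAICP configuration reference for the actual options.

```yaml
# Hypothetical chatbot.yaml fragment. The testConfigs and testMode keys
# are illustrative names, not confirmed JAICP options.
name: my-bot
entryPoint: main.sc

# Assumed: list of test files to run before each deployment.
testConfigs:
  - test/greeting.xml
  - test/checkout.xml

# Assumed: execution mode, e.g. block the deployment on test errors
# or skip running tests altogether.
testMode: block
```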

Tests are written in XML and use a strictly defined set of tags. Learn more about tags used in tests.
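As a rough sketch, a test file pairs user queries with the replies the bot is expected to send. The tag names below (test, q, a) are placeholders rather than the confirmed JAICP tag set; see the tag reference mentioned above for the exact tags.

```xml
<!-- Illustrative sketch: tag names are placeholders,
     refer to the tag reference for the actual set. -->
<test name="greeting">
    <!-- Simulated user message -->
    <q>Hello</q>
    <!-- Expected bot reply -->
    <a>Hi! How can I help you?</a>
</test>
```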

Bot quality evaluation on dialog sets

Bot quality evaluation is a tool that allows you to test a bot on dialog sets. You can upload a file with a dialog set and run tests in the JAICP interface.

Each dialog contains user requests and expected bot reactions. During a test, the tool compares the received bot reactions with the expected ones.

You can view test history and download detailed reports with the results.

Comparison with automated tests

|  | Bot quality evaluation | Automated tests |
| --- | --- | --- |
| File format | XLS, XLSX, CSV | XML |
| Bot deployment | Tests run in the background and do not affect deployment. | Different modes are available: tests can block the deployment or run in the background. |
| Checking bot reactions | You can check text responses and bot states. | You can check any reaction types. Mock objects for HTTP requests and integrations are supported. |
| Versioning | You can upload new dialog set versions in the interface. | Versions are changed together with the project code. |
| Display of results | The interface displays the test history and the percentage of successful checks for each test. You can also download detailed reports. | The interface displays notifications about test results. |