Mobot 101: Results & Metrics
In this post, we'll cover some of the frequently asked questions about Mobot's results and metrics process. If you have a question that isn't specifically covered in this post, please reach out to our team by booking a demo.
How does the robot actually flag bugs?
The robot flags bugs by comparing the actual test run to a baseline test plan. It captures screenshots during the test, and if the run deviates from the expected outcome, it flags the issue. If the robot is blocked or a test case cannot be completed, it flags the problem and moves on to the next test or test case.
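Mobot's internal implementation isn't public, so the sketch below is purely illustrative of the baseline-comparison idea described above: each test step records an expected screen from the baseline run, and any step whose captured screen deviates gets flagged. The `StepResult` fields and matching rule are hypothetical.

```python
# Hypothetical sketch of baseline comparison. Field names and the
# exact-match rule are illustrative, not Mobot's actual logic.
from dataclasses import dataclass, field

@dataclass
class StepResult:
    name: str
    expected_screen: str   # screen captured when the baseline was recorded
    actual_screen: str     # screen captured during this run
    flags: list = field(default_factory=list)

def flag_deviations(results):
    """Flag any step whose captured screen deviates from the baseline."""
    flagged = []
    for step in results:
        if step.actual_screen != step.expected_screen:
            step.flags.append(
                f"expected {step.expected_screen!r}, got {step.actual_screen!r}"
            )
            flagged.append(step)
    return flagged

run = [
    StepResult("open_app", "home", "home"),
    StepResult("tap_login", "login_form", "error_dialog"),
]
print([s.name for s in flag_deviations(run)])  # ['tap_login']
```

In practice the comparison would be against screenshots rather than screen names, but the flag-and-continue flow is the same.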
What do the reports look like?
The reports present a screenshot-by-screenshot replay of each test execution, comparing what was expected to what the robot actually encountered. They also align with any ADB logs or other test-step context provided during test plan setup. Metadata about the test run (device, OS, build ID) is also included, along with a brief explanation of any flagged issues.
Do you provide access to the reports?
Yes, access to the reports is provided to your entire engineering team. We don't price according to licenses, so everyone in your company can access the reports for tests run by Mobot.
What is the format of the results?
The test results are exportable as a JSON object that includes each test step's name, outcome, and timestamp. We're considering additional formats for future integrations.
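As a rough illustration, exported results like these can be consumed with a few lines of code. The exact export schema isn't documented here, so the field names below simply mirror the fields mentioned above (step name, outcome, timestamp) and should not be taken as a confirmed format.

```python
# Illustrative only: the JSON field names below are assumed, based on the
# fields described above, not a documented Mobot export schema.
import json

exported = """
[
  {"name": "launch_app", "outcome": "passed", "timestamp": "2024-05-01T12:00:00Z"},
  {"name": "tap_signup", "outcome": "flagged", "timestamp": "2024-05-01T12:00:08Z"}
]
"""

steps = json.loads(exported)
flagged = [s["name"] for s in steps if s["outcome"] == "flagged"]
print(flagged)  # ['tap_signup']
```

A small script like this could feed flagged steps into a bug tracker or CI summary.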
Is the same level of logging available on iOS builds as well?
Yes, the same level of logging is available on iOS builds upon request. While Apple makes it more difficult to capture these logs on every test run, we can provide iOS device console logs and network traces when needed.
Do you have a way of tracking metrics for testing?
Yes. Because we run the same regression tests in each test cycle, we provide a dashboard for comparing historical test runs with the most recent ones across devices and test plans. This data can also be exported from the dashboard.
What are the metrics used to measure performance?
Mobot is primarily used for functional, integration, and regression testing, not specifically for performance testing. We do provide timestamps, which can serve as a reference point for performance. Perceptible issues like lag, loading failures, or infinite loading spinners would get flagged during the testing process. For millisecond-level performance metrics, we recommend using Mobot as a complement to dedicated application performance management (APM) tools.