Mobot 101: Test Environment, Configuration, and Regression Testing
In this post, we'll cover some of the frequently asked questions about Mobot's test environment, configuration, and regression testing process. If you have a question that isn't specifically covered in this post, please reach out to our team by booking a demo.
Whenever we have a new build, how do we share it with you?
Builds are distributed to Mobot similarly to how you would work with other external testers.
Mobot accepts builds distributed through TestFlight, Firebase, Google Play Beta, App Center, IPA/APK files distributed from Bitrise, and more.
How are UI changes detected during testing?
When a test plan is set up for the first time, we capture baseline screenshots that establish what the UI should look like. When we run tests against a newer build, we compare its screenshots against the prior build's to detect any changes in the UI.
From an image assessment perspective, we are using a variety of techniques, including:
- Pixel-by-pixel comparison of screen regions
- Text detection via OCR, allowing us to read and parse text off the screen. We can use strict exact matching, or regular expressions to account for dynamic content that needs to be validated on screen.
- Additional technology that lets us mask out or emphasize specific regions of the screen for content detection and validation.
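As a rough illustration, the region-based pixel comparison and regex-based OCR validation described above could be sketched like this. This is a toy example under our own assumptions: the function names and the grayscale-list representation of a screenshot are illustrative, not Mobot's actual tooling.

```python
import re

def region_matches(baseline, candidate, region, tolerance=0):
    """Pixel-by-pixel comparison of one screen region.

    baseline/candidate: 2-D lists of grayscale pixel values (rows of ints).
    region: (top, left, bottom, right) bounds to compare.
    tolerance: maximum allowed per-pixel difference.
    """
    top, left, bottom, right = region
    for y in range(top, bottom):
        for x in range(left, right):
            if abs(baseline[y][x] - candidate[y][x]) > tolerance:
                return False
    return True

def text_matches(ocr_text, pattern):
    """Validate OCR-extracted text with a regular expression,
    e.g. to accept dynamic content like an order number."""
    return re.search(pattern, ocr_text) is not None

# Toy 4x4 "screenshots": identical except one pixel in the bottom-right corner.
base = [[0] * 4 for _ in range(4)]
new = [[0] * 4 for _ in range(4)]
new[3][3] = 255  # a change outside the first checked region

print(region_matches(base, new, region=(0, 0, 2, 2)))         # True
print(region_matches(base, new, region=(0, 0, 4, 4)))         # False
print(text_matches("Order #12345 confirmed", r"Order #\d+"))  # True
```

Restricting the comparison to a region is what makes this robust in practice: dynamic areas (timestamps, ads, A/B-tested banners) can be excluded from pixel checks and validated with a regex instead.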
After the robots execute the automation, every test report is reviewed by the Mobot team, who provide context for the result and attach any relevant logs to the report.
Do you capture video of screens during testing?
By default, Mobot does not capture video because the UI is designed for a screenshot-by-screenshot, frame-by-frame review. Every test action is captured by a screenshot, documented according to timestamps, and appended with logs as needed.
Videos are available upon request.
How do we update scenarios and account for specific changes?
If you have an update to a test flow, test case, or an A/B test, you can notify Mobot by recording a simple video or screen recording that shows the changes, and we will update the test for you. We can also use any changes in documentation or process to update test cases.
Many customers share Figma files or mockups with Mobot to add context for changes.
Is there a difference in turnaround times when creating new test scenarios?
Yes. Tests that are already on the platform can be run with same-day turnaround. New test cases or instructions, however, go through a review and setup process that typically takes a few business days, even for a few simple changes or additions. If you have a substantial number of new test cases, our team will work with you to devise a rollout plan. Once the new tests are set up, they can be queued for regression testing with same- or next-day turnaround. We maintain close communication with you throughout this process to meet any deadlines and deliverables.
Do tests run during business hours from Eastern Time?
Yes, our robots and team are based in New York and generally operate during standard Eastern Time business hours. However, we have experience supporting customers in various time zones worldwide, including Australia, India, Europe, and the US West Coast. The time zone differences can sometimes be beneficial, as they allow our system to conduct tests at times when your team might not be available.
Can the platform handle testing on different connection speeds?
Yes, the platform can handle testing on different connection speeds. Users are able to specify network requirements for testing, and while very custom setups might require a different engagement or scoping, the platform is capable of supporting varying connection speeds.
Is there pixel comparison for testing?
Yes, the platform does support pixel comparison for testing.
Can you simulate testing scenarios such as airplane mode, intermittent wifi, different locations?
Yes, the platform can simulate testing scenarios such as airplane mode, intermittent Wi-Fi, and varying locations. We understand that end-users may not always have reliable access to Wi-Fi, and we can design complex testing scenarios around this, such as handling image uploads when there's no Wi-Fi, among other scenarios.
If you had an iPad and iPhone next to each other, do you have to train two separate tests?
Yes, if testing an iPhone and an iPad, we may need to train two separate tests, because UI layouts can differ across devices and orientations. However, this is part of the maintenance that we perform, so it adds no extra overhead for you. The platform is also capable of designing test cases where content is uploaded on one device and then downloaded on another.
How does it handle syncing state to other devices?
Our platform is capable of designing test cases that handle syncing state to other devices. This could involve, for example, uploading a set of images or content on one device and then downloading it on another. We define preconditions and prerequisites for testing, allowing Device A to complete a set of actions before Device B's testing commences.
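The precondition pattern described above (Device A must complete its actions before Device B's testing begins) can be sketched as a simple dependency ordering. The class and function names below are our own illustrative assumptions, not Mobot's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class TestStep:
    """One device action in a cross-device test plan (hypothetical model)."""
    name: str
    device: str
    preconditions: list = field(default_factory=list)

def schedule(steps):
    """Order steps so every step runs only after its preconditions have run."""
    done, ordered = set(), []
    pending = list(steps)
    while pending:
        for step in pending:
            if all(p in done for p in step.preconditions):
                ordered.append(step)
                done.add(step.name)
                pending.remove(step)
                break
        else:
            raise ValueError("circular preconditions")
    return [s.name for s in ordered]

# Device B's sync check declares Device A's upload as a precondition.
upload = TestStep("upload_photos", device="Device A")
sync = TestStep("verify_sync", device="Device B", preconditions=["upload_photos"])
print(schedule([sync, upload]))  # ['upload_photos', 'verify_sync']
```

Even though the steps are submitted out of order, the scheduler runs the upload on Device A first, matching the precondition/prerequisite behavior described above.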
Do you use emulators or physical devices for testing?
Mobot performs all testing on physical iOS and Android phones and tablets, not on emulators or simulators. We physically execute the testing in our New York City office, interacting with each device the same way a human would. This allows us to cover a wider variety of realistic scenarios.
Do you test native and third-party camera SDKs?
Yes, we test both native and third-party camera SDKs. Because Mobot interacts with the UI like a human, it can handle any UI, whether it's the device's native camera UI or a custom third-party one with different filters and settings. We use the device's real camera for testing and can test both the front and rear cameras.
Can you handle multi-tap or multi-touch gestures?
Yes, we can handle multi-touch scenarios. While we're currently working on automating various types of actions on our platform, any custom gestures that cannot yet be performed by a robot are performed by team members. Our long-term vision is to automate all of these human gestures.
What percentage of test coverage do you envisage?
At Mobot, we strongly recommend getting as close to 100% test coverage as possible. Some of this coverage may be handled by other automated tests or manual QA, but in our experience that portion is usually less than 5%; the rest can be handled by Mobot. Many of our customers cover between 60% and 95% of their test suites with Mobot.
How long does it take for you to complete your set of regression tests?
If Mobot is provided with a build in the morning of a business day, we can usually deliver same-day test results. If there are any delays or issues, testing may go into the next day. Our turnaround times are dependable, and we can be responsive to urgent deadlines.
Do you test against TestFlight apps?
Yes, Mobot can test against TestFlight apps.
Is regression testing available?
Yes, regression testing is one of the services that Mobot offers. We can handle regression testing for a wide range of applications, ensuring that new changes or features have not adversely affected existing functionality.