How It's Tested | Ep. #2, Challenges in Mobile Testing with Daniel Knott
Learn more about Daniel Knott.
Eden Full-Goh: Hi, Daniel. Thank you so much for joining us on the How It's Tested podcast.
Daniel Knott: Hi, Eden. Thanks for inviting me, thanks for having me.
Eden: Yeah. It's always really exciting to have an opportunity to talk to a testing expert like you. I know that you've been working in mobile software testing for a number of years now, you've written a couple of books. I've seen you on the conference circuit, speaking about testing. I've seen your whitepapers online and the impressive work that you've done. But just to give our audience here an understanding of your background and where you're from, it would be great to just hear a little bit about how you got into software testing for your career and some of the experiences that you've had so far.
Daniel's Journey Into Software Testing
Daniel: No problem. Yes, I've been working in the software testing industry since 2008, I think. Like many others, I fell into software testing, because software testing is not really something that you can learn at a university. Sometimes you maybe have one class that you can take on quality engineering, or software engineering with a focus on quality. I started my testing career at IBM back then in Germany. I'm also based in Germany, in northern Germany near Hamburg.
I got the chance to work as a working student at IBM while doing my computer science university degree, and basically back then they had no idea what to do with me. On my very first day I had very little experience in programming, even though I got accepted as a working student, so they said, "Hey, we have an application here." It was some storage system, a heavy fat client on a desktop system, and they said, "Yeah, how about you test this?"
I was like, "Okay. I have no idea, let's see what I can do." And I liked it actually, I like to explore new things and to find potential bugs in an application, and even back then I was already doing exploratory testing and I wasn't even aware of that. So it was just exploring the application and was documenting my findings, and talking to developers. This is something I really enjoyed, basically.
I went back to university, of course. I was in a lucky position, I had a professor who was into quality and testing, and he recommended me some books back then to help me get more insights into software testing, because I told him, "Hey, this is really something I like and would like to do." I finally wrote my bachelor thesis on that topic. I think it was on web-based applications; I was testing and comparing different tools back then for testing that kind of application. Since I was so hooked, I started my career in software testing and said, "Okay, this is something I would like to do."
I was lucky to get my first job at a company that hired me as a junior software tester, and I finally met like-minded people, and it was great. I had the chance to exchange with people, and I was finally able to learn from professionals how to do software testing right. They taught me the techniques and everything that is required to become a software testing professional, so this was basically my start in that role.
Then a couple of years later, I got asked to start in the first mobile team, I think it was early 2011, something like that. They asked me, "Hey, would you like to go into mobile testing?" I said, "I have no idea about it, let's go ahead." Back then, and it's now more than 12 years ago, it was greenfield. There was nothing from a tool perspective. The app stores were just around the corner, companies were trying to bring out apps on iOS and Android. Back then there were also BlackBerry and Windows Phone that I had to test against. I was, again, a complete newbie, a rookie in that field, and it felt good.
At the same time, it was also scary because I had to try out lots of tools. The whole tool landscape didn't offer much, so it was trial and error, basically. The good thing is I got this opportunity in the company, and they were okay with us exploring, really doing research on the topic. Back then I had the idea, why not write everything down in a blog? Because I put so much work into that very first pioneer work, and I don't want to call it pioneer work, but for me it was like doing something from scratch. I shared this, and I thought, "Okay, let's do the blog and let's see if somebody is interested in this kind of topic."
I never expected people to read it, and it kind of blew up, actually, because people were really interested in mobile testing back then. I also got my very first chance to speak at a conference about the topic. I fell into that speaking career, so to speak, because we were supporting the Agile Testing Days. Maybe some of you know them, it's a big testing conference in Germany, also well known in Europe and, I think, in the whole testing world.
My company back then was a sponsor, basically, and as a sponsor we had, I don't remember what kind of tier it was, some premium sponsorship, because we got a speaking slot. Usually companies who get speaking slots use them for sales pitches, to showcase what their products have to offer. My boss back then said, "No, no. We don't want to do that. Let's show the audience that we do cool things from a technology point of view, rather than focusing on our product."
They asked me, "Hey, can you do it, talk about your stuff on mobile testing?" "Okay, I don't know. I've never done a talk in front of people. Let's do it." And they said, "Nah, come on. Anyway, it's the vendor track, only a few people will come around." And it was a side track, in a smaller room. But my talk, I think it was something around challenges in mobile testing if I'm not mistaken, was the only one on the agenda, on the program, talking about mobile, and the room was packed. They had to bring in more chairs; people were sitting on the ground, on the floor.
They were standing outside. It was ridiculous. I was so nervous because I didn't expect that, and after the conference all these people approached me; they wanted to know more and ask me things about it. Even back then I felt like a total newbie in mobile testing because it was still new to me. But since then my speaking career took off, and I was invited to other conferences in Europe and to meetups. I followed up with my blogging career, if you can call it a career, and then one day I had the idea, let's write a book about it.
Let's put everything that I have experienced in a book because I think books are something really great and powerful, and something that people like to read and follow up on. I told nobody back then, I think it was in 2014. I just started it on my own, I just told my wife, like, "Hey, look. I might write a book." She was like, "Nah, he's crazy. Just another crazy idea that he has, maybe next week he is doing something different."
But I was hooked and I wrote the book for more than a year in my spare time, and then I was lucky to get a publisher contract. It was thanks to people from the software testing community. I'm saying thank you to Lisa, Lisa Crispin. Some of you might know her, and Marco Scatnos, a fellow software tester back then. They basically recommended me to a publisher, who said, "Yes, let's do it."
And then I was an author at some point. Again, one little, tiny stone leads to the next opportunity, and that was just great. Yeah, now I'm jumping a bit forward to today. Right now I'm working as the head of software testing, or head of product quality engineering, as we call ourselves at MaibornWolff in Germany. We work as an IT service provider, a consultancy, so we help different clients within Germany develop great products, test them, and ship their products and get them in front of users.
Some of the clients are more B2B focused, some of them are B2C focused, so it's the complete range of things that you can think of. Yeah, so this is a brief, or rather a longer, introduction of myself, but I hope now you get a better picture, a bigger picture. For those of you who don't know, my blog is Adventures In QA. You can follow up on it, and you'll find all of the stuff there. In case you have questions, always ping me via social media. I'm always happy to help out and to share knowledge.
This is something that I really enjoy. Just later today, well, whenever this podcast is available, but today is TestBash Spring and I got asked, "Hey, can you jump into another call?" I said, "Okay, let's do so." This is also the cool thing: if you have a name in the industry, you get asked to attend events, and that's something I really enjoy, to exchange and share knowledge with others. Yeah, that's cool.
Eden: That's awesome. Yeah, I remember when I was first doing research. I started my company, Mobot, five years ago, and I remember at the very beginning of my founder journey, I was looking for any resources I could find about mobile app testing, and your book was the only book that was available at the time. This was in like 2018, which was very surprising to me, and I think that goes back to why that conference room was so packed.
People are so starved for mobile-specific, mobile-first software testing best practices, and it is slightly different from the more generic world of software testing, or testing a web app or a desktop app. It's really cool to see that you've found your specialization, your niche, where there's such a big community of testers, software professionals, and IT professionals that really need more guidance, more resources, and they're hungry for these best practices around mobile.
So I'm curious, when you first started your career you were testing both web and mobile, and then you made the decision to specialize in mobile. What were some of the key differences, or key points of specialization, that you feel make mobile testing different from the generic best practices you can read about on the internet?
Differentiating Mobile Testing From General Software Testing
Daniel: I got hooked, basically, because of the difference in the hardware; I mean, this is also one of the biggest challenges. I like this extra point, the extra challenge, when it comes to software testing. When we do normal software testing for web applications or desktop applications, it's more stationary. Users tend to sit in front of a big screen and do whatever they would like to do with the product, but with mobile you can go out, and wherever you are, you can use the product.
This was something that hooked me, I really liked the idea. Back then it was also really cool to play around with all the different devices, always having the latest ones available, all the gadgets that are out there, so from a technology point of view this was really interesting to me. Also, there were all these challenges, like how to do automation on a mobile device, keeping all the scenarios in mind, like being outside, not being connected to the office Wi-Fi, but really going out into the countryside or the city, depending on the user base.
Where are they using your product? The product I was testing back then was a social media app, and we saw in our tracking database that they were using it in the morning while commuting, then during the lunch break, and then again on the commute home. This was also important knowledge for me, to focus on that timing, so I also took time to test on a train, for example, to be out there with multiple devices and a pen and paper to note things down, taking lots of screenshots and so on. Then I also did some testing in the office where people were also using the product, so there was this mix of getting out of the office and tackling the different challenges on the different devices.
Eden: That's really interesting, because then I think the introduction of, like you were saying, testing on the train, testing in different Wi-Fi conditions. It almost feels like there needs to be more test case coverage than you would traditionally need to do for a web app, and that sort of ratio of automated to manual testing or simulated to physical testing is different between web and mobile. What do you think in your experience, based on all the clients that you've worked with, what's the right balance of automated versus manual testing for mobile?
Daniel: It really depends on the product that you are going to test. Of course you should always try to do lots of automation, and automation on real devices is always the highest aim you should go for. Get as many devices into the company, or use cloud devices or something like that, to test on the end consumer devices, because that's where the magic is happening, right? Nobody out there is using an emulator or simulator to use your product. This is not so different from traditional software testing for web applications; you should always keep this up.
Whenever you develop a new feature, teams should ask themselves on which level they would like to automate it: unit level, API level, and so forth. Which issues would churn users? Those are the things we should automate from an end-to-end perspective, and this should always be at heart.
Then, of course, what I usually do is tell people to do lots of manual testing, as you just mentioned. Manual testing is really important because there are so many factors that you cannot simulate or code in scripts, that are really hard to tackle. I think it's also the combination of things. Back then, when I started, cloud testing wasn't a thing, cloud providers weren't a thing. We were building out our own device cloud, which was tedious and exhausting.
But now it's much better, there are so many great services that companies can pick and choose from to integrate into their testing activities on real devices. Maybe in combination with crowdsourcing or crowd testing, this could be a way to get more people from around the world using the product from an end user perspective. So that's the mix of things, I would say.
Eden: I know you mentioned that when you first started your career in mobile testing, you had to deal with Windows Phone, you had to deal with BlackBerry. I remember back in college at the time it was the coolest thing, the ultimate status symbol, to have a BlackBerry, and it's kind of funny how our ecosystem has evolved since then. Do you feel like the best practices, how you tested Windows Phone devices and BlackBerrys, is that the same testing approach, the same strategy, that you use for testing iOS and Android today? Or was it different back then? How do you feel the industry has evolved?
Daniel: I would say from a mobile perspective, from the mobile scenarios, it was the same. People were using the product on a small device. Of course, what was different between the platforms was the form factor of the device. BlackBerry had the physical keyboard, even though they were also experimenting with full touch devices. But their main USP was always this physical keyboard, so it was completely different in terms of design, development, and also testing.
Windows Phone was using this tile approach, which was also a different approach when it comes to the app architecture, how you use the product, how you do system integrations. You could use different system components within an app, for example, and that was different. But overall, from a mobile perspective, it was more or less the same. That was not the biggest issue. Sometimes the hardest part was configuring the devices, putting them in the right state.
Back then, TestFlight and all these apps weren't a thing. I had a computer for Windows Phone to deploy the app on Windows Phone, a second one for BlackBerry, because we were working as isolated as possible, and I had some Macs on my desk. It was basically like working in a consumer electronics market, so many devices and different systems that I had to juggle to deploy the apps to the different devices.
That was actually also the hardest part, the whole device management; it's a huge topic. It has gotten much easier now with only iOS and Android, even though it's still challenging, because now we have, for example, watches and other gadgets that we can connect to. This is another level that we have to deal with.
Eden: Yeah, I can imagine. I know in your role at MaibornWolff, you guys are a consultancy and you work with a number of different clients, each of them with their own demographics. I'm sure there's a different mix: this company uses more iOS and less Android, or this company is Android only. So with that kind of fragmentation, how do you give recommendations to your clients about how many iOS devices they should test, or how many versions back of Android they should test? How do you normally propose a strategy for clients?
Determining Strategies for Mobile Testing Clients
Daniel: Yeah, first of all we like to get information about the user base: who is the user, the consumer of the product? Is there anything specific to it, like specific demographics or specific usage patterns, like we just mentioned? Is it more Android? Is it more iOS? Based on that data we make recommendations. I think at the moment there is no client that says, "Okay, iOS only or Android only." They usually cover both. We have one client that's more on the luxury end of things, and this client is more focused on iOS because that's where their user base is.
They're now thinking, maybe we should do Android; it's in their minds that maybe they can get some more people from there as well. So it really depends on the client, on the brand itself, and then of course on the target customer base. Then, depending on the client and industry, we offer a mix of the things I just mentioned. We do lots of automation, also on emulators and simulators and other services, because you don't always need to deploy on a real device, and that lowers the burden and speeds things up.
But we never say to go without real device testing. Testing on real devices, whether in the cloud or in hand, is always a must. Some of the clients say, "Okay, we don't want to use crowd testing because we don't have the capacity to manage the crowd as well." Imagine testing your product with, let's say, 1,000 crowd testers; you also get feedback and you have to handle all of that feedback. So that's why we say, "Okay, let's find the 10 most used devices among your customers and get those devices, either internally..."
Some of the clients are like, "Okay, we need to do everything in house. We're not allowed to use cloud testing providers, for example." So we have to get all the devices and maintain them for them. If they are more open, we can also use cloud devices and cloud services, and then we use those. But finally, we also have to test in the wild, outside, because some clients we're working with are, for example, car manufacturers, and if you're developing an app that is connected to a car, you have to go out.
If your office is big enough to bring the car in, this might be an option too. But usually you have to go outside to test in the car, while somebody is driving and you're the passenger, or while you're standing in front of the car or next to it. This poses another level of challenge, because you not only have to configure your device, you might also need to configure the car.
Is the car in the right state, in case the team is developing the complete stack, from the mobile app through the backend services and APIs to the car? That's a whole chain of end-to-end testing that you have to keep in mind, and this usually is not a job for only one testing person, because there are just too many stacks, too many layers. The complexity is just too high, so you have to work together as a team to solve that and handle the challenge.
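The "find the 10 most used devices among your customers" approach Daniel describes boils down to ranking devices by their share in the client's usage analytics and checking how much of the user base a given matrix covers. A minimal sketch in Python, with illustrative device names and shares rather than real client data:

```python
# Pick a device test matrix from usage analytics.
# Device names and shares below are made up for illustration.

def pick_device_matrix(usage_share, max_devices=10):
    """Return the top devices by usage share and their cumulative coverage."""
    ranked = sorted(usage_share.items(), key=lambda kv: kv[1], reverse=True)
    matrix = ranked[:max_devices]
    coverage = sum(share for _, share in matrix)
    return [name for name, _ in matrix], coverage

usage = {
    "Pixel 7": 0.22, "iPhone 14": 0.30, "Galaxy S22": 0.18,
    "iPhone SE": 0.12, "Moto G": 0.08, "Galaxy A53": 0.06,
    "OnePlus 10": 0.04,
}

devices, coverage = pick_device_matrix(usage, max_devices=4)
print(devices)                       # the four most used devices
print(f"{coverage:.0%} of users covered")
```

The same logic can be inverted: instead of fixing the device count, keep adding devices until the matrix covers, say, 90% of sessions.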
Eden: Yeah. What you were describing about the CarPlay and Android Auto applications you support testing, it's actually quite similar to an experience we've had at Mobot, where a customer has an iOS app and an Android app that connect to CarPlay and Android Auto. Mobot uses our robots to run a basic suite of tests, connecting to a head unit that's not in a car.
But after our testing process finishes, the customer has a separate QA team that is literally going out and driving real cars to make sure that everything is working end to end, that the firmware on the car's head unit is the right version. Everything needs to work end to end. So yeah, I can totally see how it creates this whole other dimension of testing: even if you have best practices on the automated software testing side, there's still this physical, real world component, which is really interesting.
But it also means that the product is more unique, even more compelling, because it's integrating better with people's lives, so it's really cool to see that your team is able to balance both the real world constraints and the traditional software best practices. I'm curious, you've seen so many different clients, so many different apps. Do you feel like you're able to take the same recommendations from client to client to help them build their testing infrastructure?
Or are there any examples of a client that surprised you, where you really needed to rethink the way you think about best practices? Or that maybe even goes against something you recommend in your book?
Daniel: What sometimes surprises us is that we run into challenges with the tech stacks that the clients are using, so we have to adapt to those. It's not always a greenfield where you say, "Okay, let's use the standard tools for mobile to get kick-started." Sometimes they have legacy systems that need to be connected, so that's definitely a challenge, something where we need to find tailored solutions together with the client.
In cases where we see that something doesn't make any sense, we try to tell the client, "Hey, this might not be a good idea because of X, Y, Z. It adds a lot of complexity, it's not maintainable anymore, and it doesn't make any sense." That, of course, can happen. There's one client now that's quite challenging, again because of the end-to-end perspective. It's not only the mobile app that we are developing and testing, but also all the API interfaces, not only to the backend system, which is a cloud system.
It's also connected to an IoT device, so we have another layer on top: IoT microservices, code running on limited hardware resources. So it's not like, "Hey, we go to the cloud, let's deploy, let's get some more capacity, just spawn a new server." No, we are really limited. There was an issue lately that we found from a programming perspective: we were using the wrong variables, the wrong datatypes, so we were simply consuming too much storage, and we had to rework the whole codebase and the whole testing infrastructure again to fit on the hardware devices.
So that's really challenging. Then, of course, the recommendations I usually make in the book and in talks, yes, they're a starting point to work from. But there's always an "it depends," and sometimes we need to go a different way. Sometimes that even takes some experimentation. We had one client that was really good at building hardware devices, but back then they were dumb devices.
They were not connected to the cloud, they were not connected to a mobile device, they had no Bluetooth interface, no Wi-Fi. This was again greenfield for the client, so we had the chance to do some research together with them, which was great, because then we could experiment again. As a company, we can also learn from that client how to put things together, which in the long run can lead to new recommendations, so this is something that we explore.
Eden: Got it. I know when you were first writing your book, you were one of the very few experts in the space at the time. That was when these device farms, these virtualized device farms, real physical devices plugged into racks by USB, were still gaining adoption, gaining traction. About 10 years ago, that started really gaining steam.
There's also XCUITest and Espresso, and these testing frameworks are getting better all the time. There's Detox, there's Maestro, there are all these new frameworks popping up for cross-platform, cross-functional apps as well. I'm curious what your opinions are of some of these newer tools. There are even some no-code UI automation recording tools that have appeared. I'm sure you've tried many of them, but what is your perspective on these tools? Where do you think this industry is heading?
Trends in Mobile Testing Tools
Daniel: Yeah, there's a lot of stuff going on at the moment, right? The newest kid on the block is Maestro, which you just mentioned. I was hooked as well, and I still like what the team is doing there. When I saw it for the first time I was like, "Nah, this cannot be true. It cannot be that fast, that easy, to set it up and get it running." But it's actually true, and I really think that's a great thing, because installing and using Appium can be a tedious job and is complex; it takes a lot of time.
Also, what we see at our clients, for example, is that some companies don't want to spend too much money on quality, so the developers end up testing these kinds of things. That is okay if the developers are doing the testing, at least to the extent that they can. I think it's great to have tools such as Maestro, because it's easy to set up and fits perfectly into the development toolchain, and the same goes for XCUITest and Espresso, because these are tools built for developers, so they are easy for them to use too.
I think it's important to lower the burden and to counter this perception of, "Ah, testing is boring. I don't like testing. Testing is something that everybody can do, it's easy." Which it's not. That's important, so tools have to be easy to use.
This also goes in the direction of the no-code approach and the recording aspect, which is okay for an app that has a really simple UI, a really simple interface, and is maybe not deeply connected to the operating system or to third party devices, like IoT devices or smartwatches. Those integrations might be the challenge for such tools.
The tools need to be easy to use, and that's the direction the low-code, no-code tools are going, because then, for example, product people or designers can also use them. When they think about requirements, they can already start writing testing scenarios, or even record them on a prototype, and later on we replay them in the tool. This is something that can happen.
But for companies that have a bigger product, one that has been running for years or is here to stay for years, I usually recommend investing time in tool selection: make up your mind about what the tech stack is at this point in time and which tools you would like to use. I think there is no single tool for mobile that you should use and rely on; it's always a mixture of tools.
As you just mentioned, you have Espresso, XCUITest, and other unit testing frameworks that are out there for mobile, and you should combine those tools into a set that gives you the best possible solution. There is no silver bullet, there is no "that's the toolchain or stack you should use." It depends, again, on the app and on its purpose.
I've also seen apps that were developed for just a couple of months, vanilla apps that have only one purpose, maybe for an event, for example, or for some marketing campaign. There you don't need to run a complete test suite or establish long-running test suites. Maybe it's fine to do only manual testing there, and then good to go. So this really depends.
Eden: Yeah, I think your point that there's no one solution that's going to solve everybody's mobile QA problems really resonates with me, because that's totally what we've seen as well. We work with a number of customers at Mobot where they do use Detox tests, and they have a bunch of Appium tests that have been written out.
Those are the tests that run on every single commit, on every pull request, but that's not a replacement for manual testing. You still have to get in the car and drive with Android Auto actually activated, for example. But there's a chance to also have automation for the deep links, the camera stuff, the push notification stuff. That's Mobot's bread and butter, and so the right portfolio is, like you were saying, a number of different tools, and it really does depend on the way that application needs to be used. That's why, exactly like you were saying, no one size fits all.
Daniel: Yeah, exactly. And just as you mentioned, operating system integration is becoming more important to cover, because I think users are getting lazy in using apps. Just like myself: I have a ton of apps installed, but I only use maybe 10 or 15 apps on a daily basis, and usually I use them when I have, for example, a push notification or something to react to.
This is a really hard challenge to tackle in automation, because most of the tools are tied to their sandbox, their plot within the sandbox, which is the app, so they cannot leave the app context, and that is really difficult to work around. But this is a really important thing to keep in mind, and a push notification is a huge traffic generator. On Android it's the best thing to get attention on the app. On iOS as well, but Android is a bit better, because I think notifications on Android are more powerful in terms of the feature set.
Whenever you have an app out there that uses push notifications, or that can integrate with third party native apps, for example the calendar or whatever, you have to use some sort of solution to get out of the app context and do automation there as well. I know that Mobot is doing exactly this kind of stuff with real robots, real interaction. That's just great. That's something that is hard to do; I actually don't know of any other tool at the moment.
Eden: Yeah, that was exactly what I struggled with as a product manager, and why I ultimately started the company, so it's really validating to hear you say that. I have one more question for you. Given how complicated the world of mobile is becoming, there are experts like you; I think you have a lot of intimate knowledge about what makes mobile different from your web app testing, your desktop app testing, your RPA tools.
There's a lot of other stuff in our industry that doesn't actually apply to mobile. What are ways that you think professionals like you and me can help advocate, articulate, and communicate the differences between mobile and other tech stacks, so that people start to realize mobile is actually really complicated? There are all these real world things. What could we be doing better as an industry, or as professionals, to start communicating that to our peers, to engineering teams that don't work on a mobile product, or to other product managers? Because that's something I'm always working through, trying to spread awareness about how different mobile is from other tech stacks.
Advocating for Mobile Testing
Daniel: Yeah, the example that I usually bring is: what are the jobs, or the jobs to be done, that you are doing on your mobile device? What kind of apps are you using? What are the things you do? Looking at my own habits, I use my mobile phone for everything. Banking, travel booking, gaming, meeting friends, chatting and this kind of stuff, and it's all happening here on this little, tiny device. Basically it's like our extended brain, and the software that is running on that system has to work.
It has to work because it's handling all of our sensitive data and this is something that people should be aware of and also something that you can see if you go to App Store ratings.
Mobile users have a really high expectation when it comes to mobile apps, so they spend a couple of hundred euros, now it's almost 1,000 euros or dollars to get the latest gadgets out there. They expect high quality from a hardware perspective, and this is the same thing that they expect from the software running on it.
They don't want to have a crappy app out there that is doing the job halfway. Also, it's easy for mobile users, with a single tap or just a few taps, to go to a competitor and get another app that is working much, much better than yours. That's the thing with mobile devices: if you go outside, to a train or a park, everyone is looking into their devices, which is sad on the one side, but good on the other side for us as mobile people to work more towards those technologies. That's the thing, mobile users are expecting high quality and that's why it's important to focus a lot on mobile.
The funny thing, well, it's not funny but it's surprising, is that a lot of companies still don't invest enough time and money in their mobile apps. Sometimes they start with their web application, of course that's the first point, and they have maybe a responsive application and say, "Yeah, that's good enough." But it's not what users expect, so this is something that you have to tell them. Look at yourself: how are you using your product? How are you using mobile? And is this the right way to use it? Then try to convince them with that, with a personal story.
Eden: Yeah, I think in 2023 the expectations from consumers are higher than ever before and we've moved past this world of like, "Oh, a mobile app is just reading something else. You're just reading the content off the screen as text." It's more than that, you're using it on the go, you are using it for a very distinct purpose in a particular time and place and circumstance. I absolutely see that, and I'd be excited to have another conversation again, maybe in a few months, and see how our industry continues to evolve.
Daniel: Definitely. It's interesting too. Now with all the electric cars, the key to your car is the mobile app, it's your device. The whole home automation space is already there, it's a big thing already, but looking at my peers, my friends, my network, it's only used by techies. Like you and me, we may be doing some home automation with apps so that we can control our lights or whatever, but there is a huge potential in the market for my mum and my dad, for example. They have no idea what is there, what technology is available for them to have an easier life, so this is a huge topic, there's a lot of stuff coming and we're looking forward to it.
Eden: Yeah. Thank you, Daniel, for joining us on the podcast today. One of the things that stood out to me about our conversation is how you've seen the evolution of mobile and this ecosystem from over a decade ago with Windows and Blackberry to where it is today. It's interesting to hear that there are some differences in the tech stacks, but a lot of commonalities still in the way that we should approach testing and the balance of automated and manual. If our audience would like to read your book or check out your blog, Adventures in QA, where should they go to find all those resources?
Daniel: Yes, it's exactly AdventuresInQA.com. You can find everything over there. My book, you can get it on Leanpub, it's available on Amazon as a paperback, or if you prefer, you can get it as an ebook. You can also find information about my YouTube channel, the blog posts, the stuff that I'm doing, conference talks, things that I'm going to do this year, and of course Twitter and LinkedIn. Feel free to approach me, happy to help out.
Eden: Yeah. Thank you for the role that you've played in really growing this industry. I think, yeah, I wouldn't have been able to start Mobot without the resources that you shared early on, back when you were one of the very few people actually writing about mobile software testing. I'm excited to continue to see our community grow over the next few years, but I'll always remember how you pioneered a lot of this and everything that we talk about today came from a lot of your early blog posts and your book, which is really cool.
Daniel: Thank you. Thank you for the kind words.