By Shmuel Gershon
(Note: This post, originally from July, was rewritten in August. Only format/wording changes, with additions to make it clearer.)
This is an interesting topic:
I’ve been involved lately in many conversations about uTest, or more specifically about its model. uTest is a website where companies can post their software, along with some guidelines on focus areas, and users around the world can download the app, find bugs, and get paid for the bugs they report (as long as the bugs are accepted by the posting company).
There is a lot of confusion/discussion around the good parts and the bad parts of the model, so I will share here some of the points I have taken from these conversations (thanks to all the friends who shared insights with me)… Attentive readers will notice the article is almost a copy-paste of a reply I posted in the software-testing group.
Please note that I am not saying “uTest considered harmful” or “don’t use uTest” or anything like that.
Please note that I am not saying “uTest is great” or “use uTest” or anything like that.
All I want to point out here are some of the strengths and some of the weaknesses of the model, so everyone (both testers and companies) can decide for their own context. I welcome debate over any of these points, and will update my post accordingly.
–> (A) Here we go then. Some strengths of the model:
Crowdsourcing provides diversity: James Whittaker, in his “Future of Testing” webinar, considered this model of testing (that’s where I first heard the term ‘crowdsourcing tests’) the next evolution in software testing, as it allows a huge number of testers to test in very diverse environments and configurations.
Full disclosure: The Webinar was given for uTest and at their website.
The managers are nice guys trying to create a nice community: In the case of uTest, they have blogs with guest writers such as James Whittaker and others. They are on Twitter and occasionally tweet a testing article (more often they do brand promotion, which is fine too), and they offer bug battles and discussion forums.
And at these venues they do good. I have heard that Doron Reuveni is a really nice guy, and I believe it from his tweets and posts.
Of course, people tell me that all this community building is about money and business, but that’s completely legitimate. I take money for testing at my job, too. Making money doesn’t make anything bad.
Simplified development cycle: For some companies, it appears that crowdsourcing makes their development lifecycle more efficient. See this article on how crowdsourcing supposedly slashes development time.
The points made in the article are a bit dangerous, in light of the concerns raised below, but for companies that have yet to build a testing team, starting to receive bugs before the team is complete can be a good thing.
–> (B) On the other hand, the uTest model has some weaknesses, like:
“Software testing” misinterpretation: The idea of this model is ‘pay per bug’. Companies are outsourcing testing by receiving bug reports from the community.
But submitting bugs is only one part of the work that a tester does. Testing (functional too) requires involvement with the development life cycle. Testing requires understanding of the user needs and business needs.
By keeping the testers far away from the company and its decisions or business view, the testers have little opportunity to ask good questions. They have little room to provide real information about risk (that’s what testing is about, right?), and thus they are likely to practice poor testing.
Moreover, testers get their money for reporting bugs, so this will become the ultimate purpose of submitting them. In any other context many would agree that bugs aren’t there just to be reported or even just to be fixed — bugs raise questions about the application, about the value it provides and about the general direction the product is going.
When you take the testers out of the fixing decision, you miss this healthy discussion. In this model, if a bug doesn’t get fixed, the tester probably isn’t alerted to that fact. What’s more, the tester may not even care, as he has already cashed the bug’s money and “fixing or not fixing is a business decision where we don’t have a say”, right? And with that, writing well-detailed and complete bug reports isn’t a goal for pay-per-bug testers either: if you spend valuable time writing a bug report with detailed information, you are not making money! Plus, another person may log the exact same bug with a poor report before you finish your perfect one (and you don’t receive money in that case).
Sure, the model does not foster or defend bad practices on purpose. But it has a way of motivating them, in a sense.
Poor metrics (paying per bug reported): Companies like uTest didn’t invent the “bug quantity” metric, of course. It is an old approach that has failed in many places from what I’ve learnt (it might have been successful in others) and is definitely not suitable for all contexts.
Which type of tester receives a higher score in this model? Testers who report simple bugs will rank higher than those who hunt, investigate, and find hard bugs.
Consider this together with point (C/2)… and you’ll have testers that log a large number of shallow bugs.
This kind of behavior is acquired by whoever practices it… And at the end of the day, some testers with high scores at uTest may have become accustomed to quick and dirty work. And what about bugs that don’t reproduce easily?
Bugs that don’t always reproduce need special attention and a lot of time invested. Forget about those in crowdsourcing, as more attention equals less money… The risk is that the ones who find the tricky bugs will be the end customers.
–> (C) Some other points should also be considered:
Fostering bad disputes: Any discussion translates into money. And money, being a terrible motive :), can make disputes between a company and a tester biased on both sides.
Software companies will want fewer bugs, and software testers will want more. Neither is a good goal, and both can lead to the wrong stance.
Bad attitude from testers causing bad results: Testers on a ‘real world’ project would wish to receive better-built software so they can jump into important matters instead of wading through innumerable surface bugs.
In the pay-per-bug model, testers will leave a mature application to go test an immature one (more bugs = more money)… so the tester tests worse apps, and the software company loses its testers exactly when it needs them most (when the app is stabilizing, testers will migrate to a new, immature one).
Disruptive competition between testers: In the company I work for, if I share information with a colleague and he uses it to find a bug before me, everybody is happy (as a collaborative effort generated a useful piece of information). Testers in a pay-per-bug competition will be less inclined to share information, as bugs opened by someone else mean less money and fewer score points.
In their model, you compete with other testers, and this enters the testing culture of the participants. And testers who compete are less useful in a team than testers who collaborate.
The score feedback isn’t objective: An award-winning uTest tester commented in one of the online discussions that the score for testers doesn’t represent testing skill.
The skill set required to be a good tester isn’t exercised in such a simple view of testing as ‘receive the app, try it, list the bugs’.
But this is an approach where one cannot gain more reputation by learning more skills. Instead, one has to wait until someone else decides how good he is, with few chances to appeal. Different people in different companies think differently, and the tester has no control over this process here.
Companies get the wrong picture: A large amount of bugs and activity from uTest-like websites is likely to intoxicate the software-developing company into thinking it is getting great value (just as a large amount of activity around automation does in other cases).
‘Unfair’ and harmful competition with your in-house testers: For companies that do have in-house testers, sending the application to be tested under crowdsourcing models may pose additional problems:
– How do the ‘inside’ testers feel about that?
– What happens when the outside users find bugs they missed?
– And for the same bug… how come someone from the outside gets paid for reporting it? What conflict of interest may appear when a tester realizes he can suddenly make money if he (or his wife, or his brother) reports a bug he found during work?
– Does a bug-finding competition affect how the in-house testers do their work?
Update about different types of tests: The first version of this article said that “There are companies that value regression tests, for example. As these usually generate fewer bugs, testers at pay-per-bug will not perform them. The same is true of other types of tests that raise questions and not bugs.” Update: I recently discovered that companies that want to run regression or a set of specific tests do use uTest. They post the application with a list of scripted tests for people to follow, and pay for the completion of the scripted tests. So there are ways to do these tests.
(However, in one such example that I looked at, there were no bugs reported in the script executions. What would the results be if, instead of paying for the tests passing, the company asked for bugs? But this problem isn’t exclusive to uTest; it happens everywhere, I guess.)
There may be more and other points for and against the model.
Please write them in the comments or in a private mail, and get the conversation going. I may change my mind on this matter too, why not?