By Shmuel Gershon
There’s a very active Brazilian software testing discussion mailing list in Portuguese, called DFTestes.
When I say “very active”, I mean an average of 215 messages/month in 2009, and January 2010 alone saw more than 404 messages. Compared with the other mailing lists I participate in, this is the most active one, and a tip of the hat goes to the testers and list-admins that make the discussions interesting and vibrant.
Recently a big debate took place (link in Portuguese, requires registration) on whether “testers need to know programming” or not. I made some contributions to the discussion, and I’ll try to translate my point of view into English here.
Please note that there were a lot of people participating, and the post below does not present the entire debate or even an entire opinion, just the messages I wrote as answers. I hope you enjoy them.
The proposed question was:
Do testers need programming skills? Until a few years ago, most people would definitely answer NO… But I’ve known both people that seek testing jobs as an alternative to coding, and testers that write code routinely. So, do testers need to know programming or not?
Do testers need programming skills? My comments:
The question as it is posed makes some assumptions that may be incorrect: saying that “until a few years ago, most people would definitely answer NO” is like deciding the answer nowadays is YES ;). First, from other discussions on this (there was one not long ago in the software-testing mailing list too), it sounds more like whoever used to say “NO” a few years ago still says “NO”, and whoever used to say “YES” still says “YES” :). There was no significant technology or methodology novelty that caused a change in the approach to this (Cem Kaner says testing practice has evolved only a little over the years).
What does happen is that some people confuse the software testing we (software testers) do with the unit testing and other techniques that enhance and support development (and are mostly done programmatically). It is true that many testers work on these kinds of tasks in their day-to-day, but I believe this is far from the essence of our job — and the distinction between testing and checking made by Michael Bolton may help in understanding that.

Second: if anything, the advance of years made testing without programming knowledge even easier. Jerry Weinberg describes the tests he did at the beginning of his career as written in assembly on punch cards and handed to the machine to execute. In those days it was much harder (bordering on unfeasible) to test an app without knowing how to program in the language, or details about the data types and system limitations. With time, advancements in interfaces made it much easier to test without programming knowledge. Moreover, most languages today are capable of more or less the same things, so when testing a web page most tests will be similar whether it was written in PHP or ASP.NET, and many tests will be similar for an app written in C# or Delphi. And in order to learn about internal details, you don’t need to know the code either — memory, performance and process analyzers can provide detailed views of the resources used.
That’s not to say a tester should not know how to code :). Programming has many advantages to a tester:
Being able to understand the type of problems that afflict applications (knowing “what the bug is doing to the program”).
Knowing how software is built helps when building a mental model of the software’s functioning (where does the program make decisions? when does it ask for resources? how does it communicate with libraries?).
Knowledge about the internal code of the application being tested does not necessarily help, though. At times, this can be harmful, as it limits the tests done more than it expands them. But this is another discussion, on the values of Black-Box testing and Glass-Box testing.
Gives the option to automate tasks or functions when these are needed in a test (automated tests, automated loads, automated performance probes…).
Just as important, the creation of little tools that aid the testing process (helping quick configuration of a system or gathering data for reports) can help the whole team, and make a tester an important player in the group.
It helps communication with developers. Programmers are human beings that are easy to talk to when you know their language :)… Being on the same speaking-context as them really helps. And as it happens in many other geek linguistic elites, some programmers will respect code-speakers more than non-speakers.
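As an illustration of that kind of little tool, here is a minimal sketch in Python. The log format, the severity tokens and the names `parse_level`/`summarize` are all hypothetical, invented for this example; the point is only how little code such a helper needs.

```python
# Hypothetical report helper: tally log lines by severity.
# The log format and the names parse_level/summarize are invented
# for illustration, not taken from any real project.
from collections import Counter

def parse_level(line):
    """Extract a severity token (ERROR/WARN/INFO) from a log line."""
    for level in ("ERROR", "WARN", "INFO"):
        if level in line:
            return level
    return "OTHER"

def summarize(lines):
    """Count occurrences of each severity across the log."""
    return Counter(parse_level(line) for line in lines)

log = [
    "2010-01-05 INFO service started",
    "2010-01-05 WARN cache miss on settings",
    "2010-01-05 ERROR timeout on /orders",
]
print(summarize(log))
```

A throwaway script like this takes minutes to write, yet saves the whole team from counting log lines by hand when filling in a report.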
So, should testers know how to program?
Well, it definitely helps in many facets. My non-authoritative answer is that it helps so much that no tester should ignore this area, given the opportunity.
With so many advantages, should we hire only the testers that know how to program?
Cem Kaner commented once that he “will not accept M.Sc. or doctoral students if they have weak communication skills. My experience is that these are more important, for success in my lab, than programming skills”. Kaner’s vision, in the articles he writes about recruiting (this book, and this article), is that teams should be heterogeneous and not everybody should have a coding background — the “domain knowledge” about the area and business of the application is essential for effective tests.
For example, when building a team to test an app that does stock market transactions, which kind of player would you recruit? People who know how to program, or people who know the stock market trade? The best answer, I believe Cem would say, is “people from both sides”. Personally, if I had to choose one of the extremes, I would pick the stock market expert. Hey, but he doesn’t know how to code!! Well, the end user doesn’t know either.
In the comments to Testers don’t think like Developers think like Computers I was asked about recruiting programming testers, and this is what I answered there (notice the example reuse):
Coding isn’t a required background for testers. Coding skills will help on coding tasks, and sometimes with system understanding.
But a diverse background will make the team thrive and explore the system in fruitful ways.
A domain expert can bring much more value than a B.S. in Computer science.
For example, when building an application for trading stocks, a tester with a buying/selling stocks background will find valuable information.
(See lesson 235 in “Lessons Learned in Software Testing” by Cem Kaner, James Bach and Bret Pettichord – “Staff the testing team with diverse background”).
Of course, there is no one-size-fits-all, and different projects may require different team profiles. Just as different managers and different contexts may.
In essence, I would agree that teams should be encouraged to hire non-coder testers. Or, if not encouraged, the practice should be considered perfectly acceptable.
When I instruct testers, one of my first instructions is that operating system, programming language or database limitations don’t matter. Be careful with *why* the software does the things it does… What matters is what the user wants, and knowledge about technology limitations can often get in the way.
In his article “A Letter to the Programmer“, Michael Bolton comments on the 32768-character limit of text entry in the Skype chat window and a little problem he found with it. In the comments, a programmer quickly jumps in to say that 32768 is the standard limit of the text_box element! This programmer did not notice that this doesn’t matter. If there is a problem here, a user doesn’t want to know about API limitations — and testers with coding skills risk “believing the system” instead of “defending the user”.
But what about TDD and Unit Tests?
It was commented in the thread that job openings more often than not require knowledge of a coding language and of automation tools. From this, one sees that the focus on automation is very strong when recruiting testers. But is this due to a sincere attempt to increase value? This tendency can very well be focused on cost reduction instead (which is good too, but can be harmful when done wrong).
This brings us back to the confusion commented on above, about the difference between “real tests” and the tests (checks) done during programming that enhance and support development (unit tests, TDD suites etc). In a video commented on in the thread (here, in Portuguese), a programmer explaining unit testing and TDD maintains that if you have great automatic tests supporting your code, you don’t need a software testing team.
Ouch, that is a big mix-up!
When TDD tests are mixed up with the tests software testers do, testers are being approached as a programmer’s baby-sitter, looking over the developer’s shoulder to point out coding errors. But our real function isn’t (solely) checking whether the “if x < 20” was coded correctly, but to ask whether “20” is enough, under what extreme business conditions 23 will be required, or when each of these 20 data-structs will need to be bigger.
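To make that distinction concrete, here is a minimal sketch in Python. The limit of 20 and the names `MAX_ORDER_ITEMS`/`can_add_item` are hypothetical, invented for illustration: the assertions are the programmer’s check, while the tester’s questions live in the comments, outside anything the check can answer.

```python
# Hypothetical order-limit logic; the constant and the names are
# invented for illustration only.
MAX_ORDER_ITEMS = 20

def can_add_item(current_count):
    """The programmer's "if x < 20": allow items while below the limit."""
    return current_count < MAX_ORDER_ITEMS

# A unit-style check verifies the code matches its own specification...
assert can_add_item(19) is True
assert can_add_item(20) is False

# ...while the tester asks what no assertion here can answer:
# Is 20 enough for the business? Under which conditions will 23 be needed?
# What happens when each of these 20 items grows much bigger?
```

The check can only confirm that the code agrees with itself; questioning the “20” requires a human looking at the business, not at the code.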
Programmers do well to test their code thoroughly before releasing it for testing; that way we can skip the dumb tests and focus on the real questions the customer is not asking. As it happens, for these questions most automatic checks are less valuable than manual tests — be it because the automation will not find a bug again, because these tests do not evolve together with the system, or because the automation is not thinking beyond its code. If one could automate everything a tester does and thinks, it would have been done already and we testers would be having an easy time supervising robots :). But you can’t: sapient testing is a human thing. See what James Bach writes in his article “Manual Tests Cannot Be Automated“.
(Last, full disclosure:
– I have a computer systems development engineer degree, and worked as a programmer for many years. Knowing how to program in many languages helps my testing job every day.
– Unrelated to this post, and not depending on me :), most new hires in my department at work have a background and/or education in programming; it is a prerequisite.)