Testing Zombie – Are you becoming one?

Slowly and steadily, with or without reason, deliberately or out of innocence, many of us are becoming the Zombies of the testing world. Some people turn into one right after college when they start their first job, and some turn as they climb the ladder and lose their aim and motivation.

Who is a Testing Zombie?
A Testing Zombie is a person who has been turned into a creature capable of testing without any rational thought.

How to find a Testing Zombie?
It is easy! To find a zombie around you, just look for the following symptoms/characteristics:

  • Is (s)he just coming to work because (s)he has to, without taking any ownership of actual testing?
  • Is (s)he hiding behind processes instead of testing?
  • Is (s)he pointing fingers at others instead of owning the failures?
  • Does (s)he fail to understand the functionality of the product under test?
  • Does (s)he fail to understand the technology behind the product under test?
  • Instead of owning the release and collaborating with the product stakeholders, does (s)he want to block the release to avoid future failures or blame?

The above is not a comprehensive list of symptoms that can help you identify a Zombie in testing; you can add more negative behaviors to it. But if the answer to any of the above is “Yes”, you have potentially found one around you.

Types of Testing Zombies
These zombies can be found at any level in the hierarchy. I have defined only the 2 broad categories below and will leave the rest for you to define.

  • Zombie Tester: These are people who land a job in testing only because they could not find anything else to work on. They have no inbuilt motivation to work as testers. They start working on whatever project they are assigned, execute test cases written by others word by word, point to the test case author for any missing step as the reason they missed something during testing, and try to look good as testers. They have only one mastery, and that is how to blame others for their misses.
  • Zombie Test Manager: Though these people have climbed the organisational ladder on years of experience, they do not understand the basics of why testing is needed. For them, a Test Strategy or Test Plan means filling information into templates. They are highly diplomatic and use their team(s) to run the show until things fall apart. They do not understand the complexities involved in the product, on either the technical or the non-technical front. They have the mastery to hide behind processes and create a hidden trap that is hard to break. They are mainly “Yes Boss” kind of people and act mainly to please their bosses.

These kinds of people are increasing in number at an epidemic rate in the corporate world, and the time is not far when we might all be surrounded by them; it could mean the end of the world for “Testing”! With time, we too might be bitten by such a Zombie and pushed to become one of them.

It is high time, and imperative, for each one of us to do some retrospection and see whether we need to correct our path so as to save ourselves from becoming a ZOMBIE!

‘Smoke’ test to identify ‘No Smoke’!

I was attending a conference where people started discussing the reason behind the name “Smoke test”.

Someone said that the term “Smoke test” came from electronics, where it refers to the test done when a circuit is powered on for the first time. The goal is to verify that no smoke is seen in any part of the circuit, confirming it is wired correctly.

In software, we use Smoke Tests to quickly verify the critical areas of the software and ensure that recent changes have not broken it in any fundamental way. But why is it called a Smoke test in software? Does the software produce smoke if it does not work correctly?

Per Wikipedia, with respect to software, smoke testing is preliminary testing that should reveal simple failures severe enough to reject a prospective software release. In this case, the smoke is metaphorical.

So Smoke tests actually verify that, after integration, the circuit/software behaves correctly and is ready for further testing if no smoke comes out of it.
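In software, such a “no smoke” gate usually comes down to a handful of fast, shallow checks run before deeper testing starts. A minimal sketch in Python, where the checked functions are hypothetical stand-ins for the critical areas of a real product:

```python
# Minimal smoke-test sketch: a few fast, shallow checks that gate
# deeper testing. login() and load_homepage() are hypothetical
# stand-ins for the "critical areas" of a real product.

def login(user, password):
    # Stand-in for the product's login path.
    return user == "admin" and password == "secret"

def load_homepage():
    # Stand-in for rendering the landing page.
    return "<html>Welcome</html>"

def run_smoke_suite():
    """Return True ('no smoke') only if every critical path works."""
    checks = [
        ("login works", lambda: login("admin", "secret")),
        ("homepage renders", lambda: "Welcome" in load_homepage()),
    ]
    for name, check in checks:
        if not check():
            print(f"SMOKE: {name} failed - reject the build")
            return False
    return True  # no smoke: safe to continue with deeper testing

print(run_smoke_suite())
```

If any check fails, the build is rejected before further (and more expensive) testing is attempted; that is the whole point of the gate.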

I was just thinking: instead of calling them ‘Smoke Tests’, would it make more sense to call these kinds of tests “No Smoke Tests”!

Please feel free to share your thoughts on the same.

Be a tester first

I was talking to a group of individuals from the testing community. They were boasting about the things they do as part of their testing assignments. Here is the conversation I had with them.

Me: What do you do to test a system?

Group: Validate all the requirements that are provided in the form of specifications, UI mockups, rules, etc., and certify the product for quality.

(I was a bit overwhelmed, so I smiled and congratulated them on their achievement, because as testers they were doing so much and even certifying the product for quality.)

Group: Why are you congratulating us? This is part of the job given to us as a testing team.

Me: If you are doing so much, your client must be delighted with the quality of the work you are doing.

Group (a bit nervous): NO

Me: Why?

Group: Our client does not value us and often says that the testing team is not able to find the bugs that the actual customers report. We test and certify whatever is provided to us as requirements, yet the bugs still come.

Me: So you are not doing ‘testing’.

Group (confused): We are doing testing. We are verifying everything. We mostly achieve 100% requirement coverage in our testing.

Me: Wow. Then why are bugs coming in?

Group: Because the user can do anything. He sometimes goes through the scenarios without knowing what the right values for the fields are, and that sometimes crashes the system.

Me: Didn’t you try out those scenarios?

Group: We tried some of them, but on discussing with the development team we were told that these are invalid scenarios and the user would never use them.

Me: Wow. So the developers were building it for some particular users, had already collaborated with them, and told them not to run those scenarios on the system?

Group: No. The system was being developed for users who would use it on the internet. The developers never had any idea who the actual user would be.

Me: Then why didn’t you question that? Who are the developers to decide that the user will not use the system in that way?

Group (whispering): That is how it works.

Me: This is where the problem is. It is often seen that you, the testing team, are being driven by the development team. How can they know that the user won’t use the system in ways they could not think of or are not able to fix? This is where you, as a tester, can add value to your clients. Before proudly certifying the product for quality, which is not your job anyhow (Read post), be a tester first!

You should never

  • Remove your thinking caps
  • Be driven by documented requirements
  • Be driven by developers
  • Stop testing once you have executed the pre-written test cases. Read Michael Bolton’s “Testing vs Checking” series
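The “user can do anything” point can be shown with a small negative-testing sketch in Python. `parse_age` and its validity rules are hypothetical; the point is to probe a field with the “invalid” inputs a real user might actually type, rather than only the happy-path values a scripted test case covers:

```python
# Negative-testing sketch: probe a function with the "invalid"
# inputs a real user might type. parse_age and its rules are
# hypothetical stand-ins for a field in the product under test.

def parse_age(text):
    """Convert a form field to an age; raise ValueError on bad input."""
    value = int(text.strip())          # int() itself rejects non-numbers
    if not 0 <= value <= 130:
        raise ValueError(f"age out of range: {value}")
    return value

# Inputs a "100% requirement coverage" suite often never exercises.
hostile_inputs = ["-1", "999", "", "ten", "12.5", " 42 "]

for raw in hostile_inputs:
    try:
        print(repr(raw), "->", parse_age(raw))
    except ValueError as err:
        print(repr(raw), "-> rejected:", err)
```

A system that crashes instead of rejecting these inputs is exactly the kind of bug the customers in the conversation above were reporting.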

Group: hmmm. But how should we convince the project manager of this? It might require extra time as well.

Me: Good point. It is not always easy to convince the stakeholders about time. There are two ways. Either you become rigid about implementing what I said above, which is definitely NOT the right approach, or you prove your point first. By proving, I mean showing the stakeholders the benefits they can reap from the new approach. Show how post-delivery bugs go down. Practice the testing craft and keep making the complete process more efficient over time. This will add value to what you do and will also help in convincing others.

Group (thinking): Well, we understand what you are saying, but we are still not sure the client will agree. We will try to practice what you have suggested.

Me: Thanks, guys. This was a great transpection session. At the end, I will say:

Think beyond boundaries. Take Control. Don’t allow anyone to define boundaries that you should confine yourself in while testing. – @VipsGupta

P.S. Thanks to Michael Bolton and James Bach whose transpection on twitter inspired me to write this down.

Darr ke aage jeet hai

A typical product testing scenario

The final round of testing was going on in full swing in a Product company and the team was under pressure to certify the product for release.

The Project manager was a smart person who, to serve his own interests, wanted the testing team to certify the product with minimal testing; otherwise he would miss the deadline to launch the product. He was playing it safe. On one hand he was pushing to deliver the product to the market; on the other hand he himself had nothing to lose, because he could later put the complete blame on the testing team if anything crashed in production.

The test manager reported to the Project manager and so had to follow the orders he was given.

As the testing proceeded and the critical areas were being tested, the team found a bug: a critical regression introduced by the fixes made for another bug. The team was in a dilemma: if they reported the issue, the Project manager would literally kill them, but if they didn’t, the customers would. As they struggled with the situation, the test manager came in, and everyone asked him what to do.

He was very clear in his thoughts about what was expected from the testing process. He said: we are in the testing business, and we provide a service to the development team. If we don’t report this bug, we will not be doing our duty. And the bug is so critical in nature that the customer will report it anyway, and we will be in the soup again. It is better to report something early than late.

He asked the team to report the issue in the bug management system. The tester was very afraid to do so, but had to follow the order. The project manager came in and shouted at the tester, as he was now surely going to miss the deadline. He then went to his room and asked the development team to review and fix the bug. The development team found the probable cause of the error and fixed it.

The testing team then retested the issue and the affected areas and found another critical defect, which went back to the development lab for a fix. The Project manager was turning red with each passing minute. The testing team was not following orders!

To safeguard his interests, he wrote an e-mail to all the stakeholders: as the testing team was keeping everyone busy by finding bugs at the last stage of the release cycle, the release needed to be postponed by a day.

The stakeholders called a meeting and discussed the issues. On analysis, it was clear that the testing team had nothing to do with the stage at which the bugs were being caught: the bugs were arising from the fixes the development team was making. So the testing team was not to blame.

Can you predict what happened at the end?

A few days later, all the stakeholders praised the Test Manager and the testing team in front of the whole group and said that if they had released the product with those defects, it might have pushed them into legal battles with their clients.

What is my intention of writing all this?

Testing teams often face pressure like the above in any PDLC, yet they need to be persistent in following the mission defined for their testing. There will be various people at the top who will sometimes try to put pressure on you; these will be decisive times, but never divert from the mission you are on. Believe me, by doing this you will not only gain self-satisfaction but also the respect of others. It is rightly said, “Darr ke aage jeet hai” (victory lies beyond fear).

Webinar on “Improve your test effectiveness using fact based test prioritization”

This week Impetus announced a webinar, “Improve your test effectiveness using fact based test prioritization”, scheduled for May 21.

For more information, visit http://www.indiaprwire.com/pressrelease/information-technology/2010051250496.htm

To register visit http://www.impetus.com/webinar

Top challenges faced by Product/Test Owners during Product Testing

Software testing always looks like an easy task to an outsider who does not belong to the testing community, but for most product/test owners it has always been a challenge to do it RIGHT most of the time. There have been various instances of defect leakage in the past that caused multi-million-dollar losses to companies and agencies. For a more detailed list of such issues, visit http://blogs.zdnet.com/projectfailures/?p=427

Product/test owners who are part of a software PDLC often face the following challenges, which they either address internally or hire external consultants to overcome.

#1 Are we doing the right thing the right way?

#2 Is my testing team doing enough testing?

#3 Wish I had more budget to test more…

#4 The product has to ship tomorrow; I am not sure if we will have tested it enough

#5 Is my product of acceptable quality?

#6 Shall I outsource?

With all the above challenges, the idea is to understand whether testing is taking the correct route and leading to a greater understanding of the product: its strengths, weaknesses and market potential.

The first question that comes to a product/test owner’s mind whenever a testing assignment starts is “What to test”. It is quite a tedious task and needs thoughtful consideration to decide on the areas that need to be tested to deliver a better-quality product. If we do not define it at the start, it becomes very painful going forward, and the various stakeholders keep guessing at the outcome of testing.

Once it is decided what to test, the next challenge is “How to test”. It is very important to select the right tools and activities to perform the right kind of testing and achieve the desired test coverage. Desired test coverage! Did I talk about this?

This is yet another challenge: deciding “how much to test” so that the test team can say the product is ready to ship. It is tough to clearly define the exact point at which testing activities should close. Once the scope and methodology are decided upon, the next step is the identification of risks and the formation of a mitigation plan for them. Risk identification should cover the whole life cycle, in areas like cost, schedule, technical, staffing, external dependencies and maintainability, and should include organizational and programmatic political risks. The risk list needs to be updated regularly, and risk mitigation activities need to be planned and communicated to the various stakeholders.
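One common way to turn the identified risks into a testing order is to rank areas by likelihood of failure times business impact. The sketch below is an illustration only; the areas and scores are invented for the example:

```python
# Risk-based prioritization sketch: rank product areas by
# likelihood x impact ("risk exposure") so a limited testing
# budget goes to the riskiest areas first. All data is hypothetical.

areas = [
    # (area, likelihood of failure 1-5, business impact 1-5)
    ("payment flow",  4, 5),
    ("user profile",  2, 2),
    ("search",        3, 4),
    ("report export", 5, 2),
]

# Sort by exposure, highest first: this is the suggested test order.
ranked = sorted(areas, key=lambda a: a[1] * a[2], reverse=True)

for name, likelihood, impact in ranked:
    print(f"{name:14s} exposure = {likelihood * impact}")
```

The numbers themselves matter less than the discipline: scoring forces the team to state, and stakeholders to review, why one area gets more testing than another.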

Even if one has clarified and answered each of the questions discussed above, one doubt remains throughout the testing process: “Will all the bugs be reported during testing?” Everyone knows that testing does not guarantee finding all the bugs, but effective and efficient testing does aim to surface all the critical defects that would prevent the product from running smoothly after it goes live. Still, the product/test owner faces a lot of heat if even a single defect is found by end users after go-live, and a question mark is put on the complete testing cycle. So finding the bugs during the testing cycle becomes a big challenge.

Product/test owners have to determine at the outset whether their teams are doing enough testing. This is not simple to establish, as most of the time product/test owners do not have complete visibility into the testing process being followed by the testing teams. There is sometimes no way to tell whether the team is spending its time correctly on the various tasks assigned to it. Confirming that the test team’s efforts are pointed in the right direction, towards adequate test coverage, is a big challenge. If these efforts are not measured and corrected at the right time, they might create a bigger problem at later stages, and the overall test coverage achieved might fall short of stakeholders’ expectations. And validating the product status that the team reports is another challenge that is hard to meet.

At the same time, product/test owners have to examine whether the product is finally market-ready and of acceptable quality for clients.

One of the major challenges product/test owners face is fixing the correct budget for testing activities. Often, not much introspection goes into deciding testing budgets; they are fixed on a more or less arbitrary basis, e.g. as 30% of the development budget. But for a product/test owner it is tough to plan, as far as possible, the budget that will be required for testing the product. The correct identification of the tools, resources and skills required during testing remains a major bottleneck to identifying the correct budget.

Finally, they are also grappling with the question of outsourcing, and considering it as an option when drawing up testing strategies. But for beginners, outsourcing has its own challenges: how can I know who is doing what within the team? Does the team understand the product domain and functionality well enough to take up the testing assignment? And will my IP be valued and remain in safe hands if I decide to outsource?

In my next post, I will detail some of the possible solutions that, if implemented within a testing team, can help overcome the challenges listed above.

Do feel free to put in your comments on this post.

Software Testing to Software Test Engineering

Why did I choose to call it Software Test Engineering?

I am a testing professional who started his career as a test engineer 12 years ago. At that time I was offered a developer’s job, but, for reasons I am still not sure of, I opted for QA and testing instead. My recruiter was a bit surprised at my decision, as development was the hot-selling cake in the late 90s and no one opted for testing by choice. Actually, it was not the fault of the individuals who opted for development. The belief at the time was that engineers who could not do good development opted for testing, so everyone who wanted a job in the software industry wanted to become a high-profile programmer.

As per Wikipedia, “Software testing is an empirical investigation conducted to provide stakeholders with information about the quality of the product or service under test.”

With time I developed a passion for testing and started finding better ways to do it, be it automation, defect prevention, white-box or manual testing. All the while, I kept wondering why testing is still considered not so important; why people feel that development is important and testing can be done by anyone; why developers are often seen as the people who innovate and produce better designs, while testers are just sniffer dogs that smell out the bugs introduced by the development team’s hard work.

It is a common misconception that testing is not an engineering task and that it can be done by anyone who can read and write. From one perspective it even looks true, as test engineers often need not worry about new ways to complete their tasks: their targets are fixed, and their performance is often measured by the test cases they execute and the bugs they report. Michael Bolton’s post (http://www.developsense.com/2009/11/why-is-testing-taking-so-long-part-1.html) describes the various ways test engineers are often measured in a project team.

So where is the gap? I wanted to improve testing and get it better adopted across the software world. The answer was tough, but simple: bring innovation into testing. Apply the concepts of engineering to improve testing. And so I decided to call it Software Test Engineering.

As per the definition I published on Wikipedia today, software test engineering is the acquisition and application of technical, scientific and mathematical knowledge to design and implement testing best practices and processes that enable the realization of the software testing objectives desired in a given PDLC (http://en.wikipedia.org/wiki/Software_test_engineering).

I will write more on this topic in the coming days and bring more clarity to readers. Till then, just let me know what you think. Good night from India…