How to test on a tight testing schedule?

If you have spent some time in the field of testing, you must have faced situations where your test manager asked you to test an application on the fly and deliver your report in XYZ days [replace XYZ with as few days as you can imagine]! If you have not faced such a situation yet, then either you are unbelievably lucky or, with all due respect, you have not worked on enough projects. Either way, it won't be long before you encounter one in your career as a software test engineer.

Contrary to popular belief, proper skilled testing requires a lot of planning, effort and work, and hence a substantial amount of time. Unfortunately, when projects get delayed, the time planned for testing is invariably what gets cut when the schedule is squeezed. What? The development team is running 2 months behind schedule? No problem. Time for the magic trick! Squeeze the testing schedule by 2 months and presto! Congratulations, we have got ourselves back on schedule. Cool, isn't it? Well, probably not!

If you have started to presume that this post is going to criticize how project management stinks, then thankfully, you are mistaken. As a tester, I believe that part of my job is to act as a "problem solver". So instead of whining about how the rescheduled testing time frame keeps killing the poor tester, I would rather concentrate on finding a way to deal with such a situation. When you are hit by a squeezed testing schedule, the first thing you can do to help yourself is to accept it. This kind of thing happens to everybody who works on a software project. Once we accept it as part and parcel of our profession, dealing with it suddenly starts to look easier.

When facing a short time frame for testing, you have to make the best use of the time and resources available. Starting with the assumption that "we can't test everything, no matter what" can really help. Even from an economic standpoint, it does not make sense to spend a lot of time and energy testing areas of the application where the chances of finding bugs are low [something we can fairly judge from previous testing experience]. Identifying areas where the impact of a bug would be negligible [based on expected user behavior] is another good strategy when starting out. Determining what to test first, and in which sequence, so that you spend the limited time on areas that really matter, is an important decision that requires a certain amount of analysis, intuition, and experience. Start with a risk analysis to identify the functions with the highest risk [thus most important and needing the most attention] and the functions that the end user will use most [a small scoring sketch follows the checklist below]. Having a checklist to remind you of the key areas you would not want to miss certainly helps. Here is the checklist that I often use when I have much less time than I would have wanted for testing an application:

» Functionalities that are used most often by the users. Start by asking yourself, "Which functionality is most visible to the user?"
» Functionalities that are most important to the project's intended purpose.
» The riskiest areas of the application, with the largest safety impact; areas which, if broken, can bring the entire application to its knees. [Talking to the developers for suggestions here is probably a good idea.]
» The areas of the application with the largest financial impact on the users (and hence on the project stakeholders).
» Newly added functionalities. They are often the least tested and hence the dirtiest.
» Complex functionalities that are easy to misunderstand (and hence misinterpret). Look for the parts of the code that are most complex, and thus most prone to errors.
» Functionalities that are based on parts of the requirements and design that are unclear or poorly thought out.
» Functionalities that are developed using challenging new technology, tools, architecture.
» Functionalities that are developed in rush or panic mode.
» Functionalities that demand a consistent level of performance.
» Functionalities that reflect complex business logic.
» Functionalities that require interfacing with external systems (e.g. third-party shrink-wrapped software). These are classic areas to look for integration bugs.
» Functionalities developed under extreme time pressure.
» Functionalities that had recent updates or bug fixes.
» Functionalities developed by many programmers at the same time.
» Functionalities that are most important to the project stakeholders.
» Related functionalities of similar/related previous projects that caused problems (in terms of user-reported bugs). Correlate them to the current application and use them to your advantage.
» Related functionalities of similar/related previous projects that had large maintenance expenses. Correlate them to the current application and use them to your advantage.
» Functionalities which, if they went wrong, could result in bad publicity.
» Functionalities which could cause the most customer support complaints.
» Devise tests that cover multiple functionalities/features at the same time.
» Devise tests that give the highest risk coverage in the minimum time.
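
To make the risk-analysis step concrete, here is a minimal sketch of how such a ranking could work. Everything in it is hypothetical and purely illustrative: the feature names, the 1-5 likelihood/impact ratings, and the hour estimates would come from your own project, ideally with input from the developers and stakeholders.

    # A minimal risk-ranking sketch (Python). All feature names, ratings,
    # and hour estimates are hypothetical; rate likelihood and impact
    # (say, on a 1-5 scale) together with developers and stakeholders.

    def risk_score(likelihood, impact):
        """Classic risk formula: how likely a failure is, times how much it hurts."""
        return likelihood * impact

    # (feature, likelihood of bugs, impact if broken, estimated test hours)
    features = [
        ("login",             3, 5, 4),  # most visible to users
        ("checkout/payment",  4, 5, 6),  # largest financial impact
        ("new report module", 5, 3, 5),  # newly added, least tested
        ("help pages",        2, 1, 2),  # low likelihood, low impact
    ]

    # Rank by risk so the limited time goes to the areas that matter most.
    ranked = sorted(features, key=lambda f: risk_score(f[1], f[2]), reverse=True)

    # Greedily pick the highest-risk areas that fit the time budget -- the
    # "highest risk coverage in the minimum time" idea from the checklist.
    budget_hours = 12
    plan, hours_used = [], 0
    for name, likelihood, impact, hours in ranked:
        if hours_used + hours <= budget_hours:
            plan.append(name)
            hours_used += hours

    print("Test first:", plan)

Real prioritization is, of course, not this mechanical; the numbers merely formalize the analysis, intuition, and experience mentioned above.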


This is clearly not an exhaustive checklist for testing under a tight schedule, but it covers many of the important areas that usually need attention. Being a context-driven tester, I am well aware that this checklist may or may not help a tester who is trying to test an application on a jam-packed schedule. However, it has served me quite well whenever I have needed it.

What do you do when faced with a tight testing schedule? How do you react when you suddenly find yourself stripped of valuable testing time at the very last minute of a project deadline? How do you readjust your testing strategy to cope with it? Do you have a checklist of your own that you follow? I would be delighted to hear your ideas.

Happy Testing…

About Debasis Pradhan

Debasis has over a decade's worth of experience in the fields of Software Quality Assurance, Software Development and Testing. He writes here to share some of his interesting experiences with fellow testers.

15 Comments:

  1. Nice, QA is a problem solver.

  2. This post is definitely going to help me. :)

  3. Believe me Debasis, this is one of your best posts ever on Software Testing Zone. This is a great list of things to look into when we are trapped in a squeezed deadline. Even when it is not a case of a tight testing schedule, it makes a great guideline to make sure we don't forget any of these important areas in our testing strategy. I am going to share this article link with all my tester friends and colleagues. Thanks once again.

  4. I had been looking for such a list of tests for a long time. I work in a small organization with a very small QA team. We mostly deal with web-based applications, and often we are asked to test websites and submit reports within a very short time. Sometimes that is very hard, since we are asked to test without even a solid test plan. I am going to print this post and stick it on my cubicle wall, and I will try to cover all the areas you have listed whenever I am asked to test something under a tight schedule. Thanks a lot Debasis. You Rock! :)

  5. Hi Debasis,

    What is the difference between a randomly occurring error and a sometimes-occurring error? Please reply.

    Preeti,
    preeeti.sharma@in.com

  6. @ Preeti,

    Have you read How to Reproduce a Hard to Reproduce Bug and Heisenbug - A Tester's Nightmare? To me, there is not much difference between a randomly occurring bug and a bug that occurs only sometimes! By definition, random is something that happens haphazardly. You can compare it with UFO sightings: most UFO reports come from random places at random times, and UFOs are spotted only sometimes (not always, not everywhere)! Happy Testing...

  7. This is the first time I am visiting this forum; it's cool. Thanks for creating such a nice forum.

  8. Unfortunately, while this is interesting advice, it continues to propagate poor practice. It is far better to take control of management and take control of testing. I don't allow management to put my back up against the wall like this. You have the power to define how you wish to work and the overall process. Most don't realize this power because they get stuck in a certain mindset. Don't let yourself be pushed into an impossible situation that forces you to compromise your principles. Learn to create a service level agreement with the entire project staff and redefine the processes to avoid being treated like a second class citizen.

  9. @ Anonymous,

    Could you help me in understanding this?

    It is far better to take control of management and take control of testing.

    Seriously, as a tester do you really think that you have control over the project? From all I know, it is the project stakeholders (and they DON'T include the testers) who have this control. While it would be good to have such control, expecting it in reality is mere day-dreaming, because it's the project stakeholders who fund and hence run the project, not the testers. If someone thinks that testers (should) have such control, then I'm afraid (s)he probably needs to take a step back and recognize the role of a tester first!

    Moreover, in the real world, "tight deadlines" are as common (NOT impossible) a situation as the chance of catching the flu in Spring/Winter (depending on which part of the globe you are in)! If someone still thinks that such tight deadlines (which can happen for many reasons beyond the control of even the stakeholders) are stupid and that a tester should rebel against them, then either that tester is "too naive" or "too lucky to have not faced such a situation yet"!

    P.S. If someone is treating you (the tester) as a "second-class citizen" (whatever that means), then I feel it is YOUR mistake to let them treat you like this, not theirs!

    P.P.S. I would have appreciated more if you had left your name on the comment!

  10. Superb article. In fact, this is a common question asked in most Test Lead interviews.

  11. Hi, excellent work and a nicely written article. Thanks for sharing.

  12. Hi,
    I am new to testing. My question is: everyone says analyse the risk areas, identify the risk areas.

    As a complete non-programming guy, how do I think beyond a system crash to know what the risky situations are? Please specify some of them.

  13. Thanks very much for this wonderful post. For a test lead with just 1 week to do a full functional test of integrated items (dev time was 5 months ;)), this post is heaven.

  14. Having a checklist is OK, but it seems quite a chaotic solution for testing purposes.

    Whatever projects I have worked on, there seem to be 4 fundamental areas of testing employed:
    1) System testing - where each individual component of the system is tested independently. This is high level testing looking for any errors in coding, etc. This is usually conducted by the development team due to their understanding of the code. (Yes, I know, the first rule of testing is to keep the developers out of it as much as possible; no developer likes to break their baby)
    2) Product Integration Testing - where the individual components are "fitted" together and some basic end-to-end testing is completed. It's actually quite surprising how often this phase is skipped, leading to major problems further down the line.
    3) Business Acceptance Testing - does the system comply with the business design and market regulations? You would hope the system specification matched that of the market design or business solution, but there always seems to be mistranslation between the business analysts and the technical team. This phase of testing for me is the critical stage, and goes into the lowest level of detail. I'd usually advise conducting this testing in cycles - at the end of each cycle the defect fixes are implemented and testing is repeated. I would hope after 3 cycles of testing, all major defects will have been fixed. I'd also look to complete an impact assessment after the 3 cycles taking into consideration any changes which have been made in the last cycle and conduct any minor testing if necessary.
    4) User Acceptance Testing - this phase of testing should really be about the graphical interface. This is where the business users are let loose on the product and can offer comments on whether the interface can be improved to accommodate their needs. Anything which arises in this phase of testing *should* - though often isn't - be minor.

    Ultimately, 80% of testing is in the preparation:
    - Define your scope of testing
    - Create test sets
    - Define your expected results, which are then compared to the actual results (a small sketch follows this list)
    - Make sure your data is sufficient for use!
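
    To illustrate the expected-versus-actual point, here is a tiny made-up sketch; the discount() function and its numbers are invented purely for illustration:

        # Code under test (hypothetical): 10% off orders of 100 or more.
        def discount(total):
            return total * 0.9 if total >= 100 else total

        # A tiny test set of (input, expected result) pairs.
        test_set = [
            (50, 50),      # below threshold: no discount
            (100, 90.0),   # at threshold: 10% off
            (200, 180.0),  # above threshold: 10% off
        ]

        # Compare expected results against the actual results.
        for value, expected in test_set:
            actual = discount(value)
            ok = abs(actual - expected) < 1e-9
            print(f"discount({value}): expected {expected}, got {actual} ->", "PASS" if ok else "FAIL")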

    Testing is, in my view, the most critical part of a project, but that doesn't mean it has to be unnecessarily complicated.

  15. Thanks for sharing the good article...

