- Phase 4: Production and QA
- Establishing Guidelines
- Setting File Structure
- Slicing and Optimization
- Creating HTML Templates and Pages
- Implementing Light Scripting
- Populating Pages
- Integrating Backend Development
- Understanding Quality Assurance Testing
- Creating a QA Plan
- Prioritizing and Fixing Bugs
- Conducting a Final Check
- Phase 4 Summary
Creating a QA Plan
You have known since the beginning of your redesign project that you would need to QA your site and that you would need a plan for it. Chances are, however, the extent of your QA plan is a budgetary/scheduling line that looks something like this: QA = 12 hours. Or 5 hours. Or 20+ hours. That budgetary line depends on the scope of your project, client expectations, and the expertise of your team.
Reassess your QA plan. Keep in mind that complicated frame sets, intricate HTML templates, light scripting, and links all need to be QA tested. There are essentially three levels of QA: light/informal, semiformal, and formal. Decide which level of QA your project requires.
Quality assurance testing can employ several procedures, most of which typically are used both in software development and for testing websites and web applications. In all testing situations, the extent of testing varies widely depending on technical complexity and the detail of the test plan.
A core plan for running quality assurance shows resources, time allotted, the extent of QA expectations, who is involved, criteria for acceptance, and what the development team and the client are each responsible for prior to site launch. Running QA should involve, at the very least, two complete run-throughs: first to generate a comprehensive bug list and second to go back over that bug list and make certain that the cited bugs have been fixed. For informal QA, this basic plan should suffice. For semiformal and formal plans, this core plan is expanded on accordingly.
Test Usability During QA
QA testing and usability testing are similar in approach and scope but different in expertise and goal. At times, however, the two overlap, especially when technical errors and complications (checked for during QA) affect a user's ability to move successfully through a site (checked for through usability testing). In fact, usability testing can sometimes be considered a type of QA.
While you are QA testing your site for errors, technical glitches, and cross-browser compatibility, we strongly suggest you also conduct one-on-one usability testing (also called "verification testing" at this stage). Why? To ensure that your site works from the user's point of view. Naming and labeling must be clear. Navigation must be intuitive and easy to follow. Your site might be clean and free from bugs, but if it isn't easy to use, the chances are it won't get used and will fail.
Conversely, the redesign might be easy to use (congratulations!), but if you have broken links and spelling errors, users won't get very far. Moreover, they will have a poor impression of the site and the company. Make a bug-free and user-friendly site your prelaunch goal. We recommend both QA and usability testing prior to launch. For more on usability testing, see Chapter 8.
Every test plan or testing situation will contain different criteria for acceptance. Each site will need to check functionality against requirements and across browsers, platforms, and operating systems, from simple pop-up windows and submission of forms to complex login procedures and e-commerce ordering systems. As the web continues to evolve from basic HTML to a functional, application-driven environment, more and more attention needs to be allocated to ensuring integration success.
QA & Servers
Before a site goes live, the production team should test on the staging/development server and then again after the site is moved to the actual server environment where it will eventually be live. When the site is moved over, the testing environment and the live environment must match exactly. This means that the folders, file structure, and server-side scripts must be correctly in place; otherwise, many of the scripts and CGI elements may not work properly.
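A structural mismatch between the staging and live trees can be caught with a quick script. As a rough sketch (the function name and the paths in the usage note are our own illustrations, not part of any particular workflow), Python's standard `filecmp` module can flag files and folders that exist in one tree but not the other:

```python
import filecmp

def report_structure_diffs(cmp: filecmp.dircmp) -> list[str]:
    """Recursively collect files and folders present in one tree but not the other."""
    diffs = []
    for name in cmp.left_only:
        diffs.append(f"only on staging side: {cmp.left}/{name}")
    for name in cmp.right_only:
        diffs.append(f"only on live side: {cmp.right}/{name}")
    for sub in cmp.subdirs.values():
        diffs.extend(report_structure_diffs(sub))
    return diffs

# Usage (paths are hypothetical):
# for line in report_structure_diffs(filecmp.dircmp("/staging/site", "/live/site")):
#     print(line)
```

An empty result means the two document roots have identical structure; any output lines point at folders or scripts that did not make the move.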
The Problem With Frames
If your site contains frames, expect QA to take at least twice as long. Nested frames? Even longer. As a rule, the more frames you have, the more QA is needed. Moreover, frames thwart search engines (see Phase 5: Launch and Beyond). Frames, while appropriate and good for some situations (for example, portfolios, maintaining several levels of navigation, and so on), are so problematic that most often they are simply not worth it. We recommend no frames unless absolutely necessary.
Informal testing is very basic and can be done by the development team. Formal testing usually entails hiring an outside, trained team. Semiformal is, logically, in between the two. Most sites with a development budget under $30,000 can usually get away with informal testing. Sites with complex functionality and an application layer normally include formal or at least semiformal testing in their workflow.
Include the Client
For informal testing, clients should also participate in the QA process in the same fashion as the team members: checking the site and submitting a sheaf of printouts with errors clearly indicated as well as browser and platform types noted. For any level of testing, the client should proof the content. Only the client will be able to truly know if content is in the wrong place or is incorrect. The client should be treated as (and should hopefully act as) a partner and not a finger pointer.
For informal QA processes, the QA lead or the project manager coordinates and tracks all planned tests and assigns team members to sections of the site, individual browsers, browser versions, and platforms. The assigned team member then goes through the site and compiles a list of all bugs for the HTML production team to fix. An easy way of doing this involves printing out pages that have errors and clearly indicating each error on the printout. Note that these printouts are complete and helpful only if the browser and platform are noted on each one; without knowing the browser and platform, it is difficult to re-create the error and therefore to fix it.
A Core QA Plan
- Summary of overall goals for QA, including methodology, schedule, and resource allocation.
- List of specific browsers, platforms, and operating systems being tested.
- List of desired connection speeds being tested.
- List of any specific paths or functions that need to be tested.
- A plan for bug tracking (using a web-based program, an Excel spreadsheet, or printouts).
- A plan for confirming that fixes have been made prior to launch.
- Any stated assumptions (known risks) to protect the team if all fixes cannot be caught prior to launch. These should be listed in the Details and Assumptions section (in Phase 1) of the project plan or contract and be signed off on prior to the final site being delivered or launched.
- A plan for fixing bugs that cannot be resolved prior to launch: who will handle them, how any additional costs will be identified, and so on.
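The elements above can be collected into a one-page document or a simple data structure that the whole team shares. As an illustration only (every key name and value here is hypothetical), a core plan might be captured like this:

```python
# A core QA plan as a checklist structure; all values are hypothetical examples.
core_qa_plan = {
    "goals": "Two full passes: compile a bug list, then verify every fix",
    "browsers": ["IE 5.0", "IE 6.0", "Netscape 4.7", "Netscape 6.0"],
    "platforms": ["Windows", "Mac"],
    "connection_speeds": ["56k modem", "DSL"],
    "paths_to_test": [
        "home -> contact form -> submit",
        "home -> search -> results page",
    ],
    "bug_tracking": "shared spreadsheet",
    "fix_confirmation": "second run-through against the bug list before launch",
    "assumptions": ["fixes not caught before launch fall under maintenance"],
    "post_launch_bugs": "production team handles; additional costs billed hourly",
}
```

Whether it lives in a spreadsheet, a document, or code, the point is that every item from the list above has an explicit owner and answer before testing begins.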
The project manager also tracks the "bug list," which, in informal testing, is really no more than a stack of printouts with bugs noted. A big red checkmark through the noted bug indicates that it has been addressed, and an accompanying initial indicating "Fixed" or "Deferred" with a date helps track the fixes.
Usually, for small- to medium-size sites (under $30,000 budgets) with very little technical complexity, this informal process is a perfectly adequate method. Informal testing is also referred to as "ad hoc" or "guerilla testing" in that it has no formal test plan or approach. Testers are just "banging" on the site, looking for bugs to slay.
The bank of computers (set up in the testing area) that reflects the target browsers, platforms, and connection speeds of the audience is often called a "test bed." It is difficult to list every combination of browser and platform; at least use the main ones [6.14]. Even testing a smaller, representative group will result in catching many errors on the site. Test beds are common for semiformal and formal QA. Often for informal testing, the various browsers and platforms are not in the same location.
Figure 6.14. A chart like this one will help track all of the platform/browser configurations of the target audience. It can reflect the test bed setup. This sample audience does not include users on 3.0 browsers or UNIX platforms.
If your project requires more than "guerilla testing," yet your budget will not accommodate formal testing with an outside company, the perfect middle ground is semiformal testing. Stepping up from informal to semiformal testing involves more time, expertise, and planning and, if possible, the addition of a trained QA lead and a test bed setup. A semiformal test plan should contain a one- to two-page overview that highlights the scope, timing, and goals of the QA testing process.
Planning for formal QA testing requires experience, time, budget, and most of all, attention to minute detail. The biggest differences between semiformal and formal QA are the level of test planning, the cost, the amount of documentation generated, and the degree of expertise.
Bug Tracking Tools
Although you cannot substitute automated software systems for actual QA testing with humans, many available tools can aid in the process. For HTML validation, link checking, spelling, load time, and more, try http://www.netmechanic.com. Fees range from $35 to $200 for testing up to 400 HTML pages.
Other online tools? They are plentiful. Try http://www.scrubtheweb.com to help check your META information. http://www.w3.org/People/Raggett/tidy will help you clean up your HTML. For an excellent bug-tracking tool, visit http://www.alumni.caltech.edu/~dank/gnats.html. Want to learn more about bugs? Go to http://www.mozilla.org/bugs. Mozilla itself is handy for QA as well.
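If hosted tools are unavailable, a rough in-house link check can be scripted. As a minimal sketch using only Python's standard library (the `LinkCollector` class and `extract_links` function are our own names, not part of any tool mentioned above), this collects every `href` and `src` from a page's HTML so each target can then be verified by hand or by a follow-up request:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects href/src attribute values from tags such as <a> and <img>."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)

def extract_links(html: str) -> list[str]:
    """Return every link target found in the given HTML source, in order."""
    parser = LinkCollector()
    parser.feed(html)
    return parser.links
```

For example, `extract_links('<a href="about.html">About</a><img src="logo.gif">')` returns `["about.html", "logo.gif"]`; a second pass can then check each target for a missing file or a 404.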
Formal QA uses a comprehensive bug-tracking system and a fully trained QA staff (yes, staff) to test requirements and pages against specified browsers and platforms. It includes test plans, tools, use cases, a test bed, and reports. To illustrate the extensiveness of the formal testing process, consider this example of a typical formal QA plan: identify at least 10 different paths through a site and test each path on three platforms (Mac, Windows, UNIX), with each platform hosting three browsers (IE, Netscape, AOL) and each browser having several versions (3.0 through 6.0; note that Netscape skipped 5.0), all needing to be tested. This example yields approximately 450 different tests (10 × 3 × 3 × 5) for the defined paths. Overwhelming? Yes. Impossible? No. Impossible in an informal setting? Yes. Recommended for large sites with significant backend engineering and extensive functionality? Absolutely.
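The arithmetic above is simply a Cartesian product of the test dimensions. A short sketch using Python's `itertools` (the path names and version list are illustrative stand-ins, not an exact browser history) reproduces the count:

```python
from itertools import product

# The example configuration from the formal QA plan above; values are illustrative.
paths = [f"path-{n}" for n in range(1, 11)]      # 10 user paths through the site
platforms = ["Mac", "Windows", "UNIX"]           # 3 platforms
browsers = ["IE", "Netscape", "AOL"]             # 3 browsers
versions = ["3.0", "4.0", "4.5", "4.7", "6.0"]   # 5 versions per browser

# Every combination that a formal plan would need to cover.
test_matrix = list(product(paths, platforms, browsers, versions))
print(len(test_matrix))  # prints 450
```

Enumerating the matrix this way also makes it easy to assign slices of it to individual testers or to strike combinations (such as AOL on UNIX) that the audience data rules out.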
Reporting a bug is easy. Reporting bugs in a way that is meaningful, reproducible, detailed, and solution oriented is a challenge. Here's the old, serviceable, good-for-informal-testing way: print the page out, note the browser/platform, circle the error, fix the bug, and then check the error off as fixed (or deferred if the bug can't be fixed yet). Here's another (and maybe better) way: use some kind of tracking tool; even an Excel spreadsheet will suffice, although only one person can work with the file at a time. Whatever your tracking method, make certain to note the following information:
- Browser type/platform type.
- Operating system.
- Description of problem (one line).
- Detailed description.
- URL of page.
- Severity of problem.
- Can the error be reproduced?
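The fields above map naturally onto a simple record that can be exported for any spreadsheet-based bug list. As a minimal sketch (the field names, status values, and function name are our own assumptions, not a prescribed format):

```python
import csv
import io
from dataclasses import asdict, dataclass, fields

@dataclass
class BugReport:
    """One row in the bug list; fields mirror the checklist above."""
    browser: str             # browser type
    platform: str            # platform type
    operating_system: str
    summary: str             # one-line description of the problem
    details: str             # detailed description
    url: str                 # URL of the page
    severity: str            # e.g. "critical", "major", "minor" (assumed scale)
    reproducible: bool       # can the error be reproduced?
    status: str = "Open"     # later marked "Fixed" or "Deferred"

def bugs_to_csv(bugs: list[BugReport]) -> str:
    """Serialize bug reports to CSV so any spreadsheet program can open the list."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=[f.name for f in fields(BugReport)])
    writer.writeheader()
    for bug in bugs:
        writer.writerow(asdict(bug))
    return buf.getvalue()
```

Because each record carries the browser, platform, and URL alongside the description, anyone on the production team can attempt to re-create the error without chasing down the original tester.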