by John Helgeson
Timing is everything. In your personal and business life, it can mean the difference between success and failure. Software, like any other development effort, runs on a schedule. The project moves from one event to the next until it’s complete, and each event can change the way the software is being developed. Software is built using different development models, and each model has its own sequence of events.
To perform a software audit at the right time, it’s critical to understand the software development life cycle (SDLC). There are many ways to describe software life cycles. The most basic is definition, development, and maintenance. In the “definition” stage, the developers write a wish list. In the “development” stage, they refine the wish list and bring the ideas full circle to a completed product. The “maintenance” stage follows and adapts the product for continued usefulness.
Every company has its own model that it adjusts and adapts for each product.
Audits are an integral part of software quality. As an auditor, you need to know the model the organization uses and what phase it’s in.
The waterfall model of development is the most common of all the development models. It also shows how and where an audit fits into the SDLC. The waterfall phases are:
- Idea/functional specification
- Analysis/planning
- Designing
- Coding
- Testing
- Maintenance
Auditing software: Idea/functional specification
An idea becomes a product as soon as it goes into development. Before development begins, the team expands the idea, and functions or requirements are written, modified, and rewritten. Some requirements are added to the product and others are dropped. Everyone has heard, “Wouldn’t it be nice to do…” The product requirements become an entity of their own during development.
Note that nowhere in the preceding paragraph are the words “system” or “procedure” written. All of the focus is on the product. The only system and procedures used for the development at this point are what the company already has in place. These might only be the high-level general procedures, but there must be procedures.
At this point, the auditor should concentrate on the product, with only a few questions about procedures. At the end of this phase, the functional specification or requirements should be at least 80 percent complete. Of course, there will be changes during the rest of development. Where should you, the auditor, be looking next? Configuration control of the specification should be the first thing checked. Is there a formal version control process, or is there just a document being passed around?
Is there a log or database of ideas for future versions that are not in the requirements? Many emergent ideas are lost at this point in the project. If a designer knows what to expect in future releases, he or she can plan for it and avoid redesigns. The functional specification should be peer reviewed. How many reviews were performed on the specification before it was completed? Were notes and problems listed in the minutes? Were all the right people included in the reviews? Who are the right people? Was this phase rushed, or was an adequate amount of time allotted to produce the best possible specification? Was any risk noted? If so, was it fully documented?
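An idea log doesn’t need to be elaborate to be auditable. As a minimal sketch (the fields and entries here are hypothetical, not from any particular organization), a structured record per idea is enough to show an auditor that deferred ideas are captured rather than lost:

```python
# Minimal sketch of an idea log for deferred requirements.
# All field names and entries are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Idea:
    summary: str
    raised_by: str
    target_release: str = "unscheduled"  # which future version, if known
    status: str = "open"                 # open, adopted, or rejected

idea_log = [
    Idea("Export reports to PDF", "J. Smith", target_release="2.0"),
    Idea("Multi-language user interface", "R. Jones"),
]

def open_ideas(log):
    """Return the ideas still waiting for a release decision."""
    return [i for i in log if i.status == "open"]
```

Even a list this simple lets a designer see what to expect in future releases and plan the architecture accordingly.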
Auditing software: Analysis/planning
This is the most important part of the whole project. Spend adequate time auditing the analysis and planning of the project to ensure the best possible outcome. A mistake here could cost tens or hundreds of thousands of dollars later in the project’s life cycle. Many companies fly through this phase without doing all their homework and later complain that the planning was not complete.
A rough outline should be put together to ensure a thorough understanding of the system. There should be a working group that includes users, operators, system analysts, database programmers, and others who will be affected by the system later on. After the approach has been selected, many plans need to be put together. These include:
- Software development plan
- Supplier plan
- Type of equipment used for development
- Development network plan
- Configuration management plan
- Software quality plan
- Test plan
- Review and update budget
- Installation and updates plan
- Maintenance plan
These plans bring a focus to the development process. At this point, the product is still the main focus, but its prominence begins to ebb as emphasis shifts to the systems that will create it.
As an auditor, you need to review all of these plans during development to see that they are kept up to date. Some plans might begin here and grow during other phases. Ensure that every plan has at least been started or is scheduled to be written.
Auditing software: Software design
Designing is not a one-person operation. All interested parties (programmers, testers, users, customers, and maintenance) should be involved. As part of the design review, you as the auditor should find out who was involved and their areas of expertise. This information should be available in the audit notes and management review minutes.
Design doesn’t just happen automatically. It’s the result of many hours of hashing out ideas and details. Hundreds, if not thousands, of decisions need to be made during design. Each decision could affect the outcome of the system. Every variation from the original design could cause a time delay and a rewrite at the end of the project.
If the system has a robust design, problems will be minimal. However, the team must weigh the cost of the proposed design process. Cost vs. robustness is a major factor in design.
Traditionally, the requirements will include the planned growth of a new system. If there is no such requirement and the system is transactional, question the auditee about the organization’s growth expectations. You might have a software engineer calculate how much more the system can handle to assist with this effort. Frequently, there is a significant gap between what the system can handle and the customer’s growth projection when the design is completed too quickly.
Interfaces are also often overlooked. The design should show the types of communications to be used, such as Internet, closed system, interprocessors, and suppliers’ software. Use a process map (or diagram) that shows what data will be transferred. An abbreviated list of some of the data with notes might work, or a full data dictionary might be needed. If there is no document defining the fields, no one will remember how the fields were used when a programmer needs this information in the field, after the process is implemented.
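A data dictionary can start as something very small. As a minimal sketch, assuming hypothetical field names, the point is simply that every transferred field has a recorded type, meaning, and owning system that an auditor can check:

```python
# Minimal sketch of a data dictionary for an interface.
# Field names, types, and sources are hypothetical examples.
DATA_DICTIONARY = {
    "order_id":  {"type": "int",  "description": "Unique order number",    "source": "order system"},
    "ship_date": {"type": "date", "description": "Planned ship date (UTC)", "source": "warehouse"},
    "qty":       {"type": "int",  "description": "Units ordered",           "source": "order system"},
}

def describe(field):
    """Return the recorded definition of a field; fail loudly if undocumented."""
    if field not in DATA_DICTIONARY:
        raise KeyError(f"Undocumented field: {field!r}")
    return DATA_DICTIONARY[field]
```

The value of the failure path is that an undocumented field is flagged immediately, instead of being discovered years later when no one remembers what the field meant.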
Peer reviews are an important part of the design development. All of an organization’s in-house experts should be invited to review the proposed design. Minutes and findings should be recorded. Ask for the peer review minutes and findings, along with documentation of the resolutions.
At this point in the process, the design team should have:
- Identified a development system
- Selected development tools (operating system, compiler, etc.)
- Ordered tools and equipment
- Established personnel requirements
- Updated its budget
- Prepared a list of parts that will be supplied by outside suppliers
- Prepared a list of suppliers
Auditing software: Coding (and unit testing)
At last, the designer can hand off the system design to the organization’s programmers. Depending on how detailed the design is, the programmers might need to design the individual programs.
If there are strict controls, the programmers will need to design the flow of the program or its interactions with other programs or databases. If there are any communications between program modules or databases, they must be defined. The programmers must understand the coding standards the department requires to write the code.
When auditing this phase, you will need to see the department-specific process for design to verify that all programmers are working together. Are the programmers talking to the database designer or programmers? Are interactive modules planned by the team? Unless there is cooperation and communication, they are courting disaster at integration.
Many peer reviews can take place during coding, such as:
- Design reviews
- Code reviews
- Interface or communication reviews
- Unit test plans reviews
Do the programmers have a unit test plan that addresses all the variables for the routine, or do they wing it? An error in coding at this point can be hard to find during integration testing. If the routine handles an exception (e.g., a communication error), the problem might not surface until the system is in the field. If there is a communications problem, does everyone involved work together to find a solution?
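A unit test plan that covers only the normal path misses exactly the failures described above. As a minimal sketch (the routine, its error type, and the test double are hypothetical illustrations), the exception path can be exercised with a fake link that fails on demand:

```python
# Minimal sketch of unit tests covering both the normal path and the
# exception path of a routine. All names here are hypothetical examples.

class CommunicationError(Exception):
    """Simulated link failure."""

def transmit(payload, link, retries=3):
    """Send payload over link, retrying on communication errors."""
    for attempt in range(retries):
        try:
            return link.send(payload)
        except CommunicationError:
            if attempt == retries - 1:
                raise  # out of retries: let the error surface

class FlakyLink:
    """Test double: fails a fixed number of times, then succeeds."""
    def __init__(self, failures):
        self.failures = failures
        self.calls = 0

    def send(self, payload):
        self.calls += 1
        if self.calls <= self.failures:
            raise CommunicationError("link down")
        return f"ack:{payload}"

# Normal path: no failures.
assert transmit("msg", FlakyLink(failures=0)) == "ack:msg"

# Exception path: recovers after two failures.
assert transmit("msg", FlakyLink(failures=2)) == "ack:msg"

# Exhausted retries: the error must surface, not vanish silently.
try:
    transmit("msg", FlakyLink(failures=5))
    assert False, "expected CommunicationError"
except CommunicationError:
    pass
```

Tests like the last one are the kind that keep a communication-handling bug from hiding until the product is in the field.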
At this time, any open items from the phases leading up to coding must be checked and closed.
Auditing software: Test (system integration)
Ideally, this phase should start before the coding phase has ended. In strict waterfall development, the testing phase begins only when the coding phase ends. This is called “throwing the code over the wall,” and it creates an unsupported testing environment. Overlapping the two phases makes them more productive and efficient: testers can take problems back to the programmers, who might still be working on the same code.
Integration needs to be planned. This is where all the parts come together to form a system. Poor planning costs time and blows the budget. It doesn’t matter whether each program runs independently; what matters is what happens when the programs come together. They must be able to communicate with each other.
As the auditor, you want to see the plan and the resources to be used. A new round of testing should be started each time a fix is made to the system. When all the testing is completed, will the testers run the whole test with an independent observer for one last look at the software? A second set of eyes should run the test from the beginning to look for anything that might have been overlooked.
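The audit evidence for this step can be a simple test-run log. As a minimal sketch, with hypothetical fields and entries, the record shows what an auditor looks for: a final full run, passing, with an independent observer recorded:

```python
# Minimal sketch of a test-run log. Fields and entries are hypothetical.
test_runs = [
    {"run": 1, "scope": "full",  "result": "3 failures", "observer": None},
    {"run": 2, "scope": "fixes", "result": "pass",       "observer": None},
    {"run": 3, "scope": "full",  "result": "pass",       "observer": "QA auditor"},
]

final = test_runs[-1]
# The last run should cover the whole system, pass, and have a second
# set of eyes recorded.
assert final["scope"] == "full"
assert final["result"] == "pass"
assert final["observer"] is not None
```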
Finally, some waterfall models show customer acceptance testing at this point.
Auditing software: Maintenance (production and deployment)
Maintenance after the project is completed must be planned for. All of the installation and update material should be in place. How will the company receive and handle problems reported by users of its product? Will the customer need to buy updates or upgrades? Is the company going to sell maintenance agreements?
To audit maintenance, an auditor should look at how customers are being supported. How are they reporting problems with the system? Is there a help desk that customers can call? If an auditee has a help desk, it would be a good idea to test it by calling with a problem. There are customers who won’t buy a software product from a company because the help (or call) representative doesn’t offer effective assistance or is surly. This is something I will never understand. Why spend upward of 50 percent of the sales price to market the product and allow a person making $10 an hour to lose the next sale? I report it when I find it. Review tapes of service calls, if possible.
If the system is completely custom (a one-time sale), its maintenance should have been covered in the requirements. Who is responsible for what? How does the customer request service? What kind of data does a customer need to include? Does the customer have a direct line to development? How fast does the development company respond? How long before someone is sent to the customer’s site? Is there a phone or Internet connection the development company can use to fix the problem? How long will the development company continue to repair problems: five years, ten years, or until the system is replaced? In many cases, maintenance is not discussed during the sale, but each side will have its own ideas about how things will be handled. If their ideas don’t match, what do they do?
About the author
John Helgeson has more than 35 years in the computer industry in lead or management positions. He spent more than 25 years in system design and development and the last 10 years in software quality. He is the author of The Software Audit Guide (ASQ Quality Press, 2009), a handbook for auditing software.