Electrical Transmission and Distribution--Supervisory Control and Data Acquisition (SCADA) (part 2)

4. SUPERVISORY CONTROL AND DATA ACQUISITION

4.1 Introduction

The term Supervisory Control and Data Acquisition (SCADA) refers to the network of computer processors that provide control and monitoring of a remote mechanical or electrical operation (e.g. management of a power distribution grid or control of mechanical processes in a manufacturing plant). Historically, a SCADA-based system would encompass the computers and network links that manage the remote operation via a set of field-located programmable logic controllers (PLCs) and remote telemetry units (RTUs). The PLCs or RTUs would be connected to field transmitters and actuators and would convert analogue field data into digital form for transmission over the network.

In many instances today, in high voltage and even medium voltage substations, the substation control devices are not necessarily RTUs or PLCs but intelligent electronic devices (IEDs) that serve the purposes of protection and local and remote control. These IEDs provide a means to acquire and transmit the analogue and binary input data to the control system via communication links.

A SCADA system is in essence a real-time database that represents both the current and past values or statuses of the field input/output points (tags) used to monitor and control the operation.

Relationships can be set up within the database to enable functional (or computed) elements to be represented, providing operators with a logical representation of the remote operation. This representation enables the whole operation to be monitored and controlled from a central point of command, with concise information available in clear schematic and textual form, typically on graphics workstations.

The supervisory functions of SCADA systems present plant operators with a representation of the current and historical states by means of hierarchical graphic schematics, event logs and summaries. These screens also identify all abnormal conditions and equipment failures which require operator acknowledgement and remedial actions. The control functions enable specified items of plant to be controlled by issuing direct commands, by instigating predetermined control sequences or by automatically making a programmed response to a particular event or status change.

SCADA systems do not usually handle the collation of statistical data for management information purposes. However, a SCADA system usually exists in an integrated computer hierarchy of control and as such interfaces usually exist to other computer-based systems.

4.2 Typical Characteristics

It is convenient to describe SCADA systems by considering their typical characteristics in relation to input/output, modes of control and interfaces with operating personnel.

4.2.1 Plant Input/Output

Typically a SCADA system interfaces with plant over a wide geographical area via PLCs and other RTU/bay control equipment local to the plant. The number and types of the input/output points depend upon the nature of the local equipment connected. There are two basic modes of capture of input data which may be used by the central processing facility of the SCADA system.

These are:

(i) scheduled capture, whereby the local units are polled on a regular basis and all input data are transferred; or

(ii) change of state capture, whereby only input data which have changed are transferred.
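
As a rough illustration of the two modes, the Python sketch below uses invented tag names and a dictionary in place of real field data; an actual master station would poll RTUs over a telemetry protocol such as DNP3 or IEC 60870-5.

```python
# A rough sketch of the two capture modes, with invented tag names.

class Rtu:
    """Stands in for a remote telemetry unit holding field input points."""

    def __init__(self, points):
        self.points = dict(points)            # tag -> current value
        self._last_reported = dict(points)    # assume initial states reported

    def read_all(self):
        """Scheduled capture: every point is transferred on each poll."""
        return dict(self.points)

    def read_changes(self):
        """Change of state capture: only points that differ from the last
        report are transferred, reducing traffic on slow links."""
        changed = {tag: value for tag, value in self.points.items()
                   if self._last_reported.get(tag) != value}
        self._last_reported.update(changed)
        return changed

rtu = Rtu({"CB101.status": "CLOSED", "TX1.load_kW": 740.0})
print(rtu.read_all())        # full scan: both points transferred
rtu.points["CB101.status"] = "OPEN"
print(rtu.read_changes())    # only the breaker status is transferred
```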

The input and output data are held centrally in a real time database. By holding historical as well as current data it is possible to provide facilities for analysis and reporting of trends. This facility is often particularly important for systems where most of the data are analogue rather than discrete.

In the database input and output data are usually grouped into functional units or elements. For example several input/output points might be grouped to provide the complete representation of an electrical circuit breaker.

Frequently such groups of plant input points are transformed to calculate computed points which are also stored in the database. For example a single computed point might represent the status of a number of associated circuit breakers.

A typical use of such computed points is in the management of alarms. In many cases alarms are categorized at least into major and minor alarms. Each alarm is itself likely to be a computed point, usually computed from an input value and a trip level. Some major alarms may also be computed from combinations of minor alarms. Complex strategies for predicting alarm conditions may also be used.
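
A minimal sketch of such computed points follows; the tag values and trip levels are invented for the example.

```python
# A minimal sketch of computed alarm points with invented trip levels.

def minor_alarm(value, trip_level):
    """A minor alarm is a computed point: input value against trip level."""
    return value > trip_level

def transformer_major_alarm(oil_temp_c, winding_temp_c):
    """A major alarm computed from a combination of minor alarms: raised
    only when both temperature alarms are active at once."""
    oil_hi = minor_alarm(oil_temp_c, trip_level=85.0)
    winding_hi = minor_alarm(winding_temp_c, trip_level=105.0)
    return oil_hi and winding_hi

print(minor_alarm(90.0, 85.0))               # True: minor alarm raised
print(transformer_major_alarm(90.0, 100.0))  # False: only one minor alarm active
```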

4.2.2 Control Modes

Control of plant associated with a SCADA system may be either local or remote. Local control may be exercised automatically, for example by a local PLC, or may be by local mechanical or electrical controls (automatically or manually operated).

Remote control via the SCADA system may be instigated by an operator or may be automatic. Automatic controls can be initiated by time (scheduled control) or by events (change of state control). In both cases control frequently involves initiating a pre-programmed sequence of actions which are then automatically carried out.

One advantage of distributed control, with the control systems located locally in substations and reporting to a higher-level SCADA system, is the lower dependence on troublesome communication links. In modern computer-controlled stations the SCADA system sends out a simple remote control command; any sequential control sequence is executed locally.

4.2.3 Operator Interface

The operator interface for a modern SCADA system should be designed to provide the maximum support to the operator in the role of monitoring and controlling the plant. In order to achieve this, considerable use is made of sophisticated real-time graphics to display current and retrospective input/output values and trends.

A well-designed operator interface can provide considerable support in alarm management. Where there is a potential for a large number of alarms it is particularly important that they are grouped, classified and displayed in a coherent fashion which enables the operator to concentrate on the more important alarms. Often the facility to filter out minor or consequential alarms, or to acknowledge them in groups for later response, can be valuable on its own.

Nowadays, graphical displays are usually Windows-based. There is an increasing trend towards using a mouse or a touch-sensitive screen to supplement a keyboard for most operator input. Graphical displays should incorporate a hierarchy of displays, from high level overall plant schematics down to tables of associated input/output points at the lowest levels. Banner display of important information about key events is often used. In order to supply all this functionality it is common to use multiple screens at a single operator position.

An important security facility which is required in many SCADA systems is the definition of different classes of user with access to different functions or facilities. The distinction may simply be between supervisor and operator or between supervisor terminal and operator terminal or may involve several levels of access. Most systems provide the facility to set up or configure the SCADA database. This is usually necessary for the installation and commissioning of the system but should not be available to the ordinary operator.
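
A minimal sketch of such access classes follows; the class names and privileges are invented for illustration.

```python
# A minimal sketch of user access classes with invented privileges.

PRIVILEGES = {
    "operator":   {"view", "acknowledge_alarm", "issue_control"},
    "supervisor": {"view", "acknowledge_alarm", "issue_control", "set_limits"},
    "engineer":   {"view", "configure_database"},   # commissioning only
}

def check_access(user_class, action):
    """Refuse any action not granted to the user's class."""
    if action not in PRIVILEGES.get(user_class, set()):
        raise PermissionError(f"{user_class} may not {action}")

check_access("supervisor", "set_limits")        # allowed, returns silently
try:
    check_access("operator", "configure_database")
except PermissionError as exc:
    print(exc)    # operator may not configure_database
```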

Overall control centre or room design must take into account the following:

_ The suitability of the structure to withstand possible major hazard events.

_ The ability of the layout and panel arrangement, VDUs, etc. to ensure effective ergonomic operation of the plant under both normal and emergency situations.

In particular, the control room and its operators should be considered as a "whole system" and not in isolation from one another. Control room designers and operators should be able to demonstrate that appropriate human factors considerations have been taken into account in the design. Advice concerning:

_ layout;

_ maintenance;

_ thermal room environment;

_ visual environment (e.g., desktop lighting levels to be in the order of 500 lx);

_ auditory environment (e.g. background noise not to exceed 85 dBA);

_ man-machine interfaces;

_ alarms;

_ coding techniques;

_ display design (including text, labels, display devices, and graphics);

_ controls; and

_ anthropometry (reach, seating, and posture, etc.) together with applicable International Codes of Practice is given in the References at the end of this section.

4.3 Design Issues

The throughput of the system is one of three main issues in any design. This will be dictated by the following:

(i) the number of field input/output points;

(ii) the data capture time required, which is usually dictated by the time constraints of the process being monitored and/or the time taken to respond to an event;

(iii) whether any sophisticated schemes are used for data compression;

(iv) whether a deterministic communications protocol is required guaranteeing a response in a specified time;

(v) what level of integrity is expected of the data communications;

(vi) what physical media for data transmission are acceptable in the particular application.

A second issue is redundancy: certain key plant input/output and associated operations may be deemed to be of high integrity. For such input/output, redundancy needs to be considered in one or more areas of a SCADA system to minimize the effects of failure. Typical redundancy may include:

_ dual links for plant input/output to two or more PLCs or RTUs;

_ dual communication links handling dialogues with the main supervisory processors (MSPs);

_ redundancy in the main supervisory processors provided by either employing a standby fault tolerant processor or by having two or more processors providing a shadowing function. In this case a 'standby' processor would shadow all operations of a 'normal' processor and watchdog mechanisms would enable a switchover to occur if any communication failure or data integrity errors are detected.
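
The shadowing arrangement might be sketched as follows, assuming a simple heartbeat-based watchdog; real systems also verify data integrity and exchange heartbeats over redundant links.

```python
# A sketch of a standby processor shadowing a normal one, assuming a
# simple heartbeat watchdog. The timeout value is illustrative.

import time

class StandbyProcessor:
    def __init__(self, heartbeat_timeout_s=5.0):
        self.heartbeat_timeout_s = heartbeat_timeout_s
        self.last_heartbeat = time.monotonic()
        self.role = "standby"

    def on_heartbeat(self):
        """Called whenever a heartbeat arrives from the normal processor."""
        self.last_heartbeat = time.monotonic()

    def watchdog_check(self):
        """Switch over if the normal processor has gone silent too long."""
        silent_for = time.monotonic() - self.last_heartbeat
        if self.role == "standby" and silent_for > self.heartbeat_timeout_s:
            self.role = "normal"    # standby promotes itself to master
        return self.role

standby = StandbyProcessor(heartbeat_timeout_s=0.1)
standby.on_heartbeat()           # normal processor still alive
time.sleep(0.2)                  # ... then it goes silent
print(standby.watchdog_check())  # "normal": the standby has taken over
```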

The size of the system in terms of the number of input/output points is a third issue.

As the loading in terms of the number of plant input/output points increases, the processing power required increases. The central architecture of a SCADA system may require several processors, each dedicated to specific operations. A typical partitioning would include:

_ Front End Processors (FEPs). FEPs are dedicated to handling data acquisition from field RTU and PLC equipment.

_ Graphics workstations. Where a number of operator positions are required, a distributed client-server architecture should be provided, spreading the load between a main supervisory processor and two or more graphics workstations.

_ Main Supervisory Processor (MSP). One or more MSPs provide centralized control and representation of the field input/output (plant status) by means of one or more databases. The central processor will perform functions such as data logging, handling of control sequences and maintenance of logical (functional) equipment states.

4.4 Example (Channel Tunnel)

The Channel Tunnel Engineering Management System (EMS) employs a SCADA system configured to manage remote equipment via 26,000 direct input/output points and a further 7,000 computed points. The equipment under the EMS control is the Fixed Equipment located in the two terminals in Folkestone in the UK and Coquelles in France and in the three tunnels (Running Tunnel-North, Running Tunnel-South, and the Service Tunnel).

The Fixed Equipment manages the following:

_ Electrical Distribution:

_ Connections to National Grids (225 kV and 132 kV).

_ Supply to 25 kV Overhead Catenary System.

_ Tunnel distribution of 21 kV and 3.3 kV supplies.

_ Terminal and Tunnel Lighting.

_ Mechanical Systems:

_ Normal and Supplementary ventilation systems.

_ Tunnel Cooling.

_ Pumping.

_ Fire Fighting equipment.

FIG. 12 shows the RTU equipment in the 178 equipment rooms located between the service and running tunnels. These RTUs handle the data acquisition and control of the 26,000 input/output points via 600 PLCs.

When an input/output point changes state, the new status is sent to both the French and UK control centres using a drop-and-insert connection to the RTUs.

The input/output states are handled simultaneously by main EMS processors (MEPs) in both the UK and French control centres. The MEPs are DEC VAX processors running identical SCADA application software. The machines operate in a normal/standby mode. The normal machine is the master and handles all operator dialogues from both the UK and French operator positions. The standby processor, whilst maintaining data compatibility with the normal processor, also monitors the health of the normal processor, the site networks and the through-tunnel point-to-point links. If any failures are detected then a switchover will occur and the standby machine will move to normal status.


FIG. 12 Arrangement of RTUs in Channel Tunnel SCADA system.

Three dedicated FEPs are provided in each terminal. Two of these FEPs handle communications with RTUs. The other four FEPs (two in each terminal) provide dual redundant links to a number of external systems such as fire detection and access control.

Data integrity is provided as follows:

_ Certain plant input/output has links to two different RTU processors.

_ All RTUs communicate with both the French and UK control centres. In addition, input/output states received in the French control centre are routed to the UK control centre by the through-tunnel links. Similarly, the UK control centre transmits input/output states to the French control centre. In full availability operation each MEP therefore receives two identical messages, which are filtered accordingly.

_ Redundant on-site networks.

Dedicated operator servers (OPS) provide five operator positions in the UK and four in France. In normal operation these provide for a supervisory position and two or more operating positions. The UK control centre also has a major incident control centre (MICC) with a dedicated OPS.

EMS operations are possible simultaneously in both the UK and French control centres. However, only one control centre can have an active status, which determines the nature of the operations possible.

5. SOFTWARE MANAGEMENT

PLCs, power distribution systems and SCADA systems all make use of software. In many cases, the software components can be seen as the main contributors to the systems' functionality. This use of software has many advantages but it also poses many problems which need to be addressed carefully if they are not to threaten project success.

5.1 Software - A Special Case

The use of software in control systems offers the engineer increased flexibility in the design and operation of systems. Often software allows a system to provide functionality which could not otherwise be provided in a cost effective way. However, software development projects are renowned for being late, over budget and not meeting the requirements of the customer. The key to understanding why software development projects frequently possess these unfortunate characteristics is to look at how software development differs from other branches of engineering.

The problems presented by software are many and somewhat fundamental in their nature. The still maturing discipline of software engineering attempts to address these problems.

5.1.1 Software Is Complex

Software is a highly complex dynamic object, with even a simple program having a large number of possible behavior patterns. For most non-trivial software it is impossible to exhaustively test its behavior [1] or prove that it will always behave as its specification requires. The difficulty of proving that a software system meets its specification is compounded by the lack of fundamental laws that can be applied to software. The mathematics underlying software engineering is still in its infancy compared with other branches of engineering.

5.1.2 Software Is Discontinuous

The discontinuous nature of software means that small changes in input values can result in large unexpected changes in the software and system behavior. Small changes in the software itself can have similar results. As a result meaningful testing is much less straightforward than for analogue systems. Testing of a completed software system does have a place in providing confidence that it performs its functions correctly but more is required.

Considerable effort needs to be expended on managing and assessing the software development process.

5.1.3 Software Changes Present Difficulties

The range of functions a software system can perform and the apparent ease with which new software can be added makes software very attractive to engineers. However this is deceptive; once software is built it is difficult to change with confidence. Even minor changes can have dramatic and unforeseen effects on often unrelated parts of a system. Furthermore, as more changes are made the software architecture will tend to become increasingly complex and fragmented. Changes become increasingly difficult to implement satisfactorily. This fact should be borne in mind when requesting modifications to completed systems.

5.1.4 Software Is Insubstantial

The intangible nature of software means that you cannot see, touch or feel it. As a result a software system is very difficult to appreciate until the very end of a development, when the component parts are integrated. Unfortunately, by this stage a high proportion of project resources will have been expended, making any corrective action expensive to say the least. Furthermore, testing at this stage is only of limited use in providing confidence in the software.

5.1.5 Software Requirements are Often Unclear

Software systems usually perform a very large number of diverse functions which can interact with each other in complex and subtle ways. It is very difficult for a customer to describe these functions precisely and this leads to unclear and changing requirements. This problem is made worse by the culture gap that frequently exists between customers and software developers.

In other branches of engineering the specifier of a product will usually be experienced in the engineering discipline required to build that product. This situation rarely exists with software systems. As a result software systems are often specified in narrative English because the notations of software engineering are unfamiliar to the customer. The use of English (or other natural languages) can lead to ambiguities and inconsistencies in the specification which are then fed into the development process and only discovered late in the project when they are difficult and costly to correct.

5.2 Software Lifecycle

At the highest level a software development project should be managed in the same manner as any other engineering project. Thus a software development should follow a software project lifecycle similar to that shown in Fig. 1. Such a life cycle has clearly defined phases, with each phase having defined inputs and outputs. The project should have recognized review points to aid control. Normally review points would occur at least at the end of each phase. The whole software and system development process should take place within a quality assurance system such as the ISO 9000 series. The lifecycle shown in Fig. 1 and described below is a lifecycle for software development, which should be integrated into the overall project lifecycle.

5.2.1 Requirements Specification Phase

The objective of the requirements specification phase is to produce a clear, complete, unambiguous, non-contradictory description of what a system is to do. The requirements specification should be fully understandable to both the customers and the developers. There may be a separate software requirements specification, but if not, the software requirements should be clearly separated and identifiable within the overall requirements specification.

Errors at the requirements specification phase can have very serious consequences and therefore the developers should make a major effort to confirm its correctness.

When the requirements specification has been agreed, a requirements test specification (often called an acceptance test specification) should be drawn up. This document should state those tests a system must pass for it to be acceptable to the customer. Should a system fail any of the acceptance tests, the customer has the right to have the problem fixed and the tests repeated.

However, these tests cannot on their own ensure that the software is correct.

5.2.2 Software Design Phase

Using the requirements specification the developers will begin designing the software. As with any engineering discipline this is an essentially creative process which can be done in many different ways.

The objective of the software design phase is to decompose the software into a coherent set of self-contained modules which will each have their own specification and which can each be tested separately. The software design phase will often see the software development process disappear into a tunnel as far as the customer is concerned. Some time later a fully working system will emerge from the other side at the software validation phase. The work carried out within this tunnel is vitally important and it is well worth the customer understanding and monitoring what occurs.

A structured top down approach should be taken to this high level design of the software, producing a hierarchy of modules at different levels. A variety of techniques, often supported by automated tools, may be used during the design. Typical techniques include data flow diagrams, state transition diagrams, object-oriented notations and entity relationship diagrams. Any of these techniques should be supplemented by English language descriptions.

5.2.3 Software Module Design Phase

The objective of the software module design phase is to perform the detailed design of exactly how each module will carry out its required task. The means by which this detailed design is expressed will vary depending on the type of system being developed and the tools used by the supplier. Typical approaches are to use logic diagrams, flow charts, pseudo code (programming-language-like statements), formal mathematical notation or decision tables. Alternatively the techniques used during the software design phase may continue to be used. Often a combination of such methods, supplemented by English language description, is best. It is essential that the required inputs and outputs, their meaning and possible values are clearly identified for each module.

During the detailed design of a module the developer should produce a test specification detailing those tests that need to be carried out to confirm the correct functions of a module once coded.
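
Such a test specification can usefully be expressed as executable tests. The sketch below assumes a hypothetical module, an analogue input scaling function, with expected values invented for illustration.

```python
# A sketch of a module test specification expressed as executable unit
# tests, for a hypothetical analogue-input scaling module.

import unittest

def scale_analogue(raw, raw_span=4095, eng_min=0.0, eng_max=100.0):
    """Convert a raw 12-bit ADC count to engineering units."""
    return eng_min + (raw / raw_span) * (eng_max - eng_min)

class TestScaleAnalogue(unittest.TestCase):
    def test_bottom_of_range(self):
        self.assertEqual(scale_analogue(0), 0.0)

    def test_top_of_range(self):
        self.assertEqual(scale_analogue(4095), 100.0)

    def test_midpoint(self):
        self.assertAlmostEqual(scale_analogue(2047), 50.0, places=1)

if __name__ == "__main__":
    unittest.main()
```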

5.2.4 Code Phase

The objective of the code phase is to transform the software design specification and software module design specifications into a coherent computer program or programs.

It is important to ensure that the code produced is understandable to persons other than the author. In order to achieve this, project standards for code structure, format and commenting should be set up and adhered to. The code produced should also be reviewed, and changes to approved code should be strictly controlled.

In principle the programming language or languages to be used could be selected at this stage. In practice it is likely that design constraints considered earlier will already have determined the language. Such considerations might be dictated by availability, experience or the processor used, in addition to the merits of a particular language. If possible, a high-level structured language should always be preferred to assembler.

5.2.5 Software Testing Phase

The software testing phase covers the testing of the software from individual modules to the complete software system. The phase therefore involves much more than testing against the acceptance test specification. The objective of the phase is to ensure that the software functions correctly, in so far as this can be achieved by testing. In order to satisfactorily test the individual modules it is likely to be necessary for much of this testing to take place in parallel with the coding phase, though conceptually it occurs afterwards.

Records should be kept of the testing of each individual module, of each group of modules as software integration proceeds and of the complete integrated software. These records should be considered part of the documentation of the software and should be retained either by the supplier or the customer. The customer should ensure that the testing process is monitored either directly or by a third party.

5.2.6 Software/Hardware Integration Phase

The objective of the software/hardware integration phase is to combine the software and hardware into a coherent whole. The integration process involves further testing of the software and system, with further changes being made to the software to resolve any problems which arise. Frequently part or all of this phase must take place at the customer's site.

It is essential that the activities of this phase, particularly software changes and their testing, are adequately controlled and recorded.

5.2.7 Software Validation Phase

The software validation phase occurs when the software is complete. The objective of the phase is to ensure that the completed software complies with the software requirements specification. A variety of methods may be used, including software and system testing and various levels of review of the software and system documentation.

The relationship between software validation and acceptance testing may vary depending on the type of project, the function of the software and the customer requirements. In some cases software validation is required as part of the acceptance testing before software installation on site. In other cases validation may be required after all commissioning adjustments have been implemented.

5.2.8 Software Maintenance Phase

Software maintenance differs from other maintenance activities in that it necessarily involves modifications to the software. These modifications may correct errors in the software, add facilities which should have been included originally or add new facilities. Often software maintenance involves upgrading to a new operating system and modifying the existing software so that it works within the new environment.

Because software maintenance always involves new changes to the software it requires careful control and regulation. For example, the benefits or otherwise of each proposed change should be carefully considered and analyzed before the change is authorized.

5.3 Software Implementation Practice

The process of software development described in Section 5.2 provides a theoretical framework for the activities which an engineer can expect to see taking place. The concept of a software lifecycle and its associated documentation are well understood and accepted but interpretations vary. In particular, phase and document titles may not match those presented in Section 5.2. Nevertheless it should be possible to identify all the key features in any software development process (see Section 5.3.1).

In practice there are a variety of tools, methods and techniques which suppliers can and should use during the software lifecycle. There are no clear rules about which should be used, and the choice may well affect the lifecycle and documentation set associated with the software. The important point is that none of the tools, methods or techniques is sufficient on its own.

They should be used as part of an approach based on a coherent justifiable lifecycle and associated with comprehensive documentation and software project management techniques (see Section 5.4).

5.3.1 Key Lifecycle Features

The key features which should be evident in any software lifecycle are:

_ a clear specification of the software which identifies its requirements separately from the system requirements and separately from the software design;

_ a software design which is recorded and goes through two or more stages of increasing detail before coding;

_ software testing which is clearly specified, and which covers each stage of the code being developed, integrated and installed;

_ final validation of the completed code against the requirements;

_ control of changes to the complete software; and

_ formal design and quality reviews at appropriate points in the life cycle.

5.3.2 Software Safety and Reliability

The safety aspects of systems containing software are often not appreciated by engineers. Software can often provide the potential for enhanced safety through enhanced functionality. However, the characteristics of software are such that special care is needed where it is to be used as part of a system which has the potential to harm or is otherwise required to be of high integrity. It is beyond the scope of this section to provide any detailed guidance on the issue, but TABLE 5 lists a few of the emerging draft and final standards and guidelines which apply. The Institution of Electrical Engineers (now renamed The Institution of Engineering and Technology) produced an excellent professional brief on the subject of safety-related systems which has been updated by guidance from the Hazards Forum.

===

TABLE 5 Software Safety Standards

===

5.3.3 Analysis and Design Methods and Tools

Various tools are available to assist with software specification, design, implementation and test. Different methods address different aspects of the software lifecycle and use different approaches. All provide at least some of the framework on which to base a software lifecycle.

Computer aided software engineering (CASE) tools and the methods on which they are based are frequently used as the foundation on which a software development project is planned. Such tools typically assist with specification, design and implementation and provide much of the necessary life cycle documentation for those phases. The provision of such documentation by an automated tool helps ensure that it is consistent and follows a coherent format. Most importantly traceability of requirements, through to the final design and code, is also ensured. In many cases the methods and tools make extensive use of diagrams which helps make the designs understandable.

Formal methods provide a mathematically based approach to software specification and design. The principal attraction of such methods is that they allow a proof that the mathematical specification is internally consistent and that the completed code correctly implements the specification. At the time of writing these methods are not widely used and the necessary skills for their use are in short supply. In the future the use of such methods can be expected to increase, particularly for high integrity applications.

A proprietary code management tool to control build configurations should be adopted by system developers. Such tools assist in providing librarian facilities in a multi-developer environment and ensure that all software modifications are recorded and incorporated into new system builds. The tools also provide configuration control facilities by version-stamping individual files and enabling current and historic versions of a software system to be recovered and rebuilt.

Other methods and tools are available which are not so readily categorized. Static analysis can be used to analyze the software code and generate metrics which express various characteristics of the code as numbers. Combinations of these metrics can be used to help form a judgment about the quality of the code and its structure. Dynamic analysis can be used to exercise the code and collect data about its behavior in use.
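
As a rough illustration, the sketch below derives one such metric, a count of decision points (a simplified proxy for cyclomatic complexity), from Python source; production static analysis tools compute far more than this.

```python
# A rough static metric: counting decision points in Python source.

import ast

def decision_points(source):
    """Count branching constructs; more branches mean more test paths."""
    tree = ast.parse(source)
    branch_nodes = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)
    return sum(isinstance(node, branch_nodes) for node in ast.walk(tree))

code = '''
def classify(value, lo, hi):
    if value < lo or value > hi:
        return "alarm"
    return "normal"
'''
print(decision_points(code))   # 2: one `if`, one `or`
```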

5.3.4 Configuration Management

Configuration management of software systems should be applied during the development and operational life of the software in order to control any changes required and to maintain the software in a known state.

To achieve configuration management the components of a software system are partitioned to form configuration items. These encompass all design and test documentation as well as the constituent software components.

The concept of a baseline is applied to software once the build is in a known state, usually once the software integration phase in the development lifecycle is reached. Thereafter any changes required, resulting from anomalies or functional modifications, are controlled through a predefined change control process. The basic stages of the change control process are:

_ identification of need for change;

_ identify change implementation, assess impact and approve (or reject) implementation;

_ audit change implementation;

_ install modified software and update the baseline.

5.4 Software Project Management

This section sets out areas in which the problems and techniques of software-based projects differ from those in more traditional manufacturing projects. Nonetheless the basic issues of project management remain valid, and to achieve success the following areas need to be addressed:

_ definition of work scope;

_ risk incurred;

_ resources required; and

_ tasks and phases to be accomplished.

5.4.1 Planning and Estimating

In common with any project, planning and estimation attempt to quantify the resource required. Typically this is measured in terms of man-months of effort, chronological duration, task breakdown and other areas affecting cost. The complexity of software requirements and the difficulty of correctly defining them make resource requirements difficult to estimate. Over recent years a number of estimation techniques have evolved which attempt to quantify the likely costs and durations. In all cases the techniques are based on past experience and on the function sizing of the whole computer-based system.

Each estimation technique has a number of common attributes:

_ project scope;

_ software metrics (measurements) forming the basis on which the estimates are to be made;

_ functional and task decomposition allowing estimation of individual items.

There are two basic categories of estimation technique: size-oriented and function-oriented. An example of a size-oriented technique is the constructive cost model (COCOMO), which computes development effort as a function of program size and produces estimates of development effort (cost) and duration.

In contrast, function-oriented techniques typically refer to a function point analysis and consider the effort associated with the number of user inputs and outputs, enquiries, files and interfaces. Once calculated, function points are used to derive productivity, quality and cost measurements.
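
As a worked illustration of the size-oriented approach, a minimal sketch of the basic COCOMO calculation follows, using the published basic-model coefficients for an "organic" mode project (small team, well-understood problem); the 32 KLOC program size is invented for the example.

```python
# A sketch of the basic size-oriented COCOMO model in "organic" mode.
# Treat the output as a rough planning figure, not a commitment.

def basic_cocomo_organic(kloc):
    effort_mm = 2.4 * kloc ** 1.05         # effort in man-months
    duration_m = 2.5 * effort_mm ** 0.38   # chronological months
    return effort_mm, duration_m

effort, duration = basic_cocomo_organic(32.0)
print(f"effort   ~ {effort:.0f} man-months")   # ~ 91 man-months
print(f"duration ~ {duration:.1f} months")     # ~ 13.9 months
```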

5.4.2 Scheduling

In a small software development a single software engineer may well analyze, design, code, test and subsequently install a system. However, as project size and complexity increase, more engineers must become involved. In a multi-person project team there is a time overhead incurred in communication between team members. In addition, when new members join a project team in an attempt to make up lost time, they need to learn the system, most likely from those already working on the project. In summary, as project size and complexity increase, the engineering effort required for implementation increases exponentially. If project development slips (or requires accelerating), adding new effort will typically increase the magnitude of any slippage (at least in the short term).

The basic issue to be considered is that people's working relationships and structures are essential for project success, but need careful structuring and management.

5.4.3 Effort Distribution

All software estimation techniques lead to estimates of project effort (typically in man-months). These assume an effort distribution across the development lifecycle of 40-20-40 (see Fig. 1). The 40-20-40 distribution puts the emphasis on the front-end analysis and design tasks and on back-end testing.
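
As a worked example of the rule of thumb (the 90 man-month total is illustrative):

```python
# Applying the 40-20-40 rule of thumb to an illustrative total estimate.

total_effort_mm = 90.0    # e.g. from a COCOMO-style estimate
shares = {"analysis and design": 0.40, "coding": 0.20, "testing": 0.40}
for phase, share in shares.items():
    print(f"{phase}: {share * total_effort_mm:.0f} man-months")
# analysis and design: 36, coding: 18, testing: 36 man-months
```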

5.4.4 Progress Monitoring

The insubstantial nature of software makes progress very difficult to measure. A lot of resource is often required to complete a project which is reported as nearly complete. A typical figure quoted is 50% of the resource to complete the last 10% of the project.

By partitioning and reporting on software development activities down to a low level, realistic measurement of progress becomes more practical. Because each basic task is small and self-contained, it is relatively straightforward to identify whether it has been completed and thus to estimate the progress which has been made.


