Safety Management Systems — General
Section 1. General Information
2.1.1.1. The Evolution of Safety Management Systems (SMS). Over the past 100 years of civil aviation's continuous development, there has been a steady evolution of safety practices and a corresponding steady improvement in aviation safety.
A. Several years ago, the leading authorities in the domain of aviation safety recognized that further advances in safety practices were required in order to raise the level of safety of civil aviation. Out of this work, the concept of the Safety Management System was developed.
B. The International Civil Aviation Organization (ICAO) was an early promoter of SMS principles, and in 2006 it established Standards and Recommended Practices (SARPs) for Safety Management Systems in ICAO Annexes 1, 6, 8, 11, 13 and 14. In effect, ICAO mandated the implementation of SMS for all commercial air operators and non-commercial operators of large aircraft engaged in international air transportation, as well as for repair stations, aircraft manufacturers, pilot training organizations, international aerodromes and air traffic service providers.
2.1.1.3. SMS Regulations in the Kingdom of Saudi Arabia. The Kingdom of Saudi Arabia (KSA) published its first set of SMS regulations in March 2009, but there was no systematic implementation of these regulations at that time. With the publication of the General Authority of Civil Aviation Regulations (GACAR) in 2012, new Safety Management System regulations were included in Part 5.
GACAR Part 5 is modeled after the ICAO SMS SARPs and includes all of the ICAO-mandated SMS components and elements. GACAR Part 5 adopts a structure and rule language similar to that used in the SMS regulations of the United States (U.S.) Federal Aviation Administration (FAA). Table 2.1.1.1 provides a complete comparison between GACAR Part 5, 14 CFR Part 5 and the ICAO SMS requirements. GACAR Part 5 is applicable to the following certificate holders:
• All air operators certificated under GACAR Part 119
• All aerodromes (except heliports) certificated under GACAR Part 139
• All pilot schools, flight engineer schools and training centers certificated under GACAR Parts 141, 142 and 143 that operate aircraft
• All repair stations certificated under GACAR Part 145
• All air traffic service providers authorized under GACAR Part 171
A. For the most part, the GACAR Part 5 SMS requirements are general requirements in the sense that they do not distinguish between an SMS for an air operator versus an SMS for an aerodrome, for instance. The GACAR Part 5 SMS requirements apply to any certificate holder required to have an SMS. GACAR Part 5 does, however, include appendices that list several special SMS requirements that are only applicable to certain kinds of aviation organizations. GACA calls these “sector specific” SMS requirements.
2.1.1.5. The Purpose of this eBook Volume. Volume 2 of this handbook is dedicated to the subject of Safety Management Systems. This Volume provides detailed guidance for aviation safety inspectors (Inspectors) to use when evaluating the SMSs implemented by aviation organizations to which the SMS regulations apply. Like GACAR Part 5, the SMS guidance in this Volume is, for the most part, general in nature in the sense that it does not distinguish between an SMS for an air operator versus an SMS for an aerodrome, for instance. The SMS guidance applies to any aviation organization required to have an SMS. As with GACAR Part 5, where special guidance is needed to address SMS requirements that apply only to certain kinds of aviation organizations, these items are clearly noted in the Volume.
2.1.1.7. The Structure of this Handbook Volume. This volume of the handbook is organized to facilitate ease of use by Inspectors. Chapter 1 includes introductory remarks on the evolution of SMS concepts and regulations. Chapter 2 outlines the SMS framework in detail and lists the general objectives and expectations for each of the required components, elements and processes that make up an SMS. Chapter 3 explains the phased implementation process for SMS; phased implementation is only permitted for existing certificate holders under the transitional provisions of GACAR Part 199. Chapter 4 explains in detail the tools used by Inspectors for assessing an SMS to ensure that it complies with the SMS framework and therefore with GACAR Part 5 requirements. Chapter 5 explains in detail the processes and tools to be used by Inspectors when formally accepting an SMS; formal GACA acceptance of an SMS is a regulatory requirement under GACAR Part 5. Chapter 6 presents additional guidance concerning certain key topics associated with SMS. Finally, Chapter 7 includes guidance related to sector specific SMS requirements.
2.1.1.9. Linkages to other eBook Volumes. Although this Volume is dedicated to SMS topics, several other handbook volumes will be needed by Inspectors overseeing compliance with the SMS requirements. Specifically, Volume 12 includes guidance for Inspectors concerning the ongoing surveillance of an aviation organization’s SMS and Volume 13 includes guidance to be used when safety concerns or non-compliances associated with an SMS are identified.
Table 2.1.1.1. SMS Requirements Concordance Table
2.2.1.1. BACKGROUND.
A. The objective of this chapter is to describe, in detail, the framework of a Safety Management System (SMS). The SMS framework is composed of components, elements, and processes, each of which is explained in terms of its functional expectations, or how they would need to be used in order to contribute to an effective SMS. These functional expectations are further defined in terms of performance objectives (what the process needs to do) and design expectations (what needs to be developed) to better align with current system safety and safety oversight models.
B. The SMS framework addresses two important needs:
1) To provide one standard set of concepts, documents, and tools for the development and implementation of an SMS that complies with General Authority of Civil Aviation Regulations (GACAR) Part 5.
2) To make the General Authority of Civil Aviation (GACA) documents and tools align with the structure and format of the International Civil Aviation Organization (ICAO) SMS Framework. It should be noted that this SMS framework also aligns exactly with the Federal Aviation Administration (FAA) SMS Framework.
2.2.1.3. SCOPE AND APPLICABILITY.
A. Scope. This Framework provides guidance for SMS development by aviation organizations (for example, commercial air operators, schools and training centers, repair stations, air traffic service providers, and aerodrome operators) and forms the basis for SMS assessments which are described in detail in Chapter 4 of this volume.
B. Applicability. The GACA views the objectives and expectations in this Framework as a minimum for an aviation organization to develop and implement in order to comply with the SMS requirements as specified in GACAR Part 5.
1) This Framework describes the objectives and expectations for an aviation organization’s SMS.
2) This Framework is intended to address only operational and support processes and activities that are related to aviation safety and not to address those related to occupational safety, environmental protection, or customer service quality.
3) Aviation organizations are responsible for the safety of services or products they purchase or contract from other organizations.
4) This document establishes the minimum objectives and expectations for an effective and compliant SMS; aviation organizations may establish additional or stricter requirements.
C. References. This Framework is in accordance with the following documents:
• ICAO Annex 19, Safety Management
• International Civil Aviation Organization (ICAO) Document 9859 (as amended), ICAO Safety Management Manual (SMM)
• ICAO Document 9734, Safety Oversight Manual
• FAA Safety Management System Framework for Safety Management System (SMS) Pilot Project Participants and Voluntary Implementation of SMS Programs, Rev. 3
2.2.1.5. DEFINITIONS.
A. Accident. An unplanned event or series of events that results in death, injury, occupational illness, damage to or loss of equipment or property, or damage to the environment.
B. Accountable Executive. The single, identifiable person having final responsibility for the effective and efficient performance of the organization’s SMS.
C. Analysis. The process of identifying a question or issue to be addressed, modeling the issue, investigating model results, interpreting the results, and possibly making a recommendation. Analysis typically involves using scientific or mathematical methods for evaluation.
D. Assessment. The process of measuring or judging the value or level of something.
E. Attributes. System Attributes, or the inherent characteristics of a system, are present in any well-defined organization and apply to an effective SMS. While the six system attributes were first applied with the FAA’s Air Transportation Oversight System (ATOS), there are conceptual differences when applied to SMS.
F. Audit. Scheduled, formal reviews and verifications that evaluate whether an organization has complied with policy, standards, and/or contract requirements. An audit starts with the management and operations of the organization and then moves to the organization’s activities and products/services.
G. Authority. Who can direct, control, or change the process, as well as who can make key decisions such as risk acceptance. This attribute also includes the concept of empowerment.
H. Aviation System. The functional operation or production system used by an organization to produce an aviation product or service (see System and Functional below).
I. Complete. Nothing has been omitted and what is stated is essential and appropriate to the level of detail.
J. Conformity. Fulfilling or complying with a requirement [ref. ISO 9001-2000]; this includes but is not limited to complying with national regulations. It also includes complying with company requirements, requirements of operator-developed risk controls, or operator policies and procedures.
K. Continuous Monitoring. Uninterrupted (constant) watchfulness (checks, audits, etc.) over a system.
L. Controls. Controls are elements of the system, including hardware, software, special procedures or procedural steps, and supervisory practices designed to keep processes on track to achieve their intended results. Organizational process controls are typically defined in terms of special procedures, supervisory and management practices, and processes. Many controls are inherent features of the SMS Framework. Practices such as continuous monitoring, internal audits, internal evaluations, and management reviews (all parts of the safety assurance component) are identified as controls within the design expectations. Additionally, other practices such as documentation, process reviews, and data tracking are identified as controls within specific elements and processes.
M. Corrective Action. Action to eliminate (remove) or mitigate (lessen) the cause or reduce the effects of a detected nonconformity or other undesirable (unwanted) situation.
N. Correct. Accurate without ambiguity or error in its attributes.
O. Documentation. Information or meaningful data and its supporting medium (e.g., paper, electronic, etc.). In this context, documentation is different from records because documentation is the written description of policies, processes, procedures, objectives, requirements, authorities, responsibilities, or work instructions; whereas Records are the evidence of results achieved or activities performed.
P. Evaluation. An independent review of company policies, procedures, and systems. If accomplished by the company itself, the evaluation should be done by a person or organization in the company other than the one performing the function being evaluated. The evaluation process builds on the concepts of auditing and inspection. An evaluation is an anticipatory process designed to identify and correct potential problems before they happen. An evaluation is synonymous with the term “systems audit.”
Q. External Audit. An audit conducted by an entity outside of the organization being audited (e.g., the flight operations division audits the flight training department).
R. Functional. The term “function” refers to “what” is expected to be incorporated into each process (e.g., human tasks, software, hardware, procedures, etc.) rather than “how” the function is accomplished by the system. This makes for a more performance-based system and allows for a broad range of techniques to be used to accomplish the performance objectives. This, in turn, maximizes scalability while preserving standardization of results across the aviation organization communities.
S. Hazard. Any existing or potential condition that can lead to injury, illness, or death; damage to or loss of a system, equipment, or property; or damage to the environment. A hazard is a condition that might cause (is a prerequisite to) an accident or incident.
T. Incident. A near-miss episode with minor consequences that could have resulted in greater loss, or an unplanned event that could have resulted in an accident or did result in minor damage. An incident indicates that a hazard or hazardous condition exists, though it may not identify what that hazard or hazardous condition is.
U. Interfaces. This aspect includes examining such things as lines of authority between departments, lines of communication between employees, consistency of procedures, and clearly delineating lines of responsibility between organizations, work units, and employees. Interfaces are the “Inputs” and “Outputs” of a process.
V. Interfaces in Safety Risk Management and Safety Assurance. Safety Risk Management (SRM) and Safety Assurance (SA) are the key processes of the SMS. They are also highly interactive, especially in the input-output relationships between the activities in the processes. This is especially important where interfaces between processes involve interactions between different departments, contractors, etc. Assessments of these relationships should pay special attention to flow of authority, responsibility, and communication, as well as procedures and documentation.
W. Internal Audit. An audit conducted by, or on behalf of, the organization being audited (e.g., the flight training department audits the flight training department).
X. Lessons Learned. Knowledge or understanding gained by experience, which may be positive, such as a successful test or mission, or negative, such as a mishap or failure. Lessons learned should be developed from information obtained from inside and outside of the organization and/or industry.
Y. Likelihood. The estimated probability or frequency, in quantitative or qualitative terms, of an occurrence related to the hazard.
Z. Line Management. The management structure that operates (controls, supervises, etc.) the operational activities and processes of the aviation system.
AA. Nonconformity. Non-fulfillment of a requirement [ref. ISO 9001-2000]. This could include but is not limited to, noncompliance with national regulations, company requirements, requirements of operator-developed risk controls, or operator-specified policies and procedures.
BB. Objective. The desired state or performance target of a process. Usually it is the final state of a process and contains the results and outputs used to obtain the desired state or performance target.
CC. Operational Life Cycle. Period of time from implementation of a product/service until it is no longer in use.
DD. Organization. Indicates both certificated and non-certificated aviation organizations including air navigation service providers, air operators, repair stations, aerodromes, and flight schools and training organizations.
EE. Outputs. The product or end result of an SMS process that can be recorded, monitored, measured, and analyzed. Outputs are the minimum expectation for the product of each process area and the input for the next process area in succession. Each of the outputs of a process should have a method of measurement specified by the organization. Measures need not be quantitative where this is not practical; however, some method of providing objective evidence of the attainment of the expected output is necessary. A table of SMS Process Outputs is at Figure 1, at the end of this definitions section.
FF. Oversight. A function performed by a regulator (such as the GACA) that ensures that an aviation organization complies with and uses safety-related standards, requirements, regulations, and associated procedures. Safety oversight also ensures that the acceptable level of safety risk is not exceeded in the air transportation system.
GG. Preventive Action. Preemptive action to eliminate or mitigate the potential cause or reduce the future effects of an identified or anticipated nonconformity or other undesirable situation.
HH. Procedures. ISO-9001-2000 defines “procedure” as “a specified way to carry out an activity or a process.” Procedures translate the “what” in goals and objectives into “how” in practical activities (things people do). Procedures are simply documented activities to accomplish processes (e.g., a way to perform a process). The organization should specify its own procedures for accomplishing processes in the context of its unique operational environment, organizational structure, and management objectives.
II. Process. A set of interrelated or interacting activities that transform inputs into outputs.
JJ. Process Measures. Ways to provide feedback to responsible parties that required actions are taking place, required outputs are being produced, and expected outcomes are being achieved. A basic principle of safety assurance is that fundamental processes be measured so that management decisions can be data-driven. The general expectations for Component 1, Policy, specify that SMS outputs be measured and analyzed. These measurements and analyses are accomplished in Component 3, Safety Assurance. Outputs of each process should, therefore, be identified during Component 3 activities. For example, these outputs should be the subjects of continuous monitoring, internal audits, and internal evaluation.
KK. Product/Service. Anything that is offered or can be purchased that might satisfy a want or need in the air transportation system.
LL. Records. Evidence of results achieved or activities performed.
MM. Residual Safety Risk. The safety risk that exists after all controls have been implemented or exhausted and verified. Only verified controls can be used for assessing residual safety risk.
NN. Responsibility. Who is accountable for management and overall quality of the process (planning, organizing, directing, controlling) and its ultimate accomplishment.
OO. Risk. The composite of predicted severity (how bad) and likelihood (how probable) of the potential effect of a hazard in its worst credible (reasonable or believable) system state. The terms “risk” and “safety risk” are interchangeable.
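The composite of severity and likelihood described in this definition is commonly worked with as a risk matrix. The sketch below illustrates one possible encoding; the category labels, numeric ranks, and acceptability cut-offs are illustrative assumptions for demonstration, not values prescribed by GACAR Part 5 or this Framework.

```python
# Illustrative risk matrix. The severity/likelihood categories and the
# acceptability cut-offs below are assumptions, not Framework-mandated values.
SEVERITY = {"negligible": 1, "minor": 2, "major": 3, "hazardous": 4, "catastrophic": 5}
LIKELIHOOD = {"extremely improbable": 1, "improbable": 2, "remote": 3,
              "occasional": 4, "frequent": 5}

def risk_index(severity: str, likelihood: str) -> int:
    """Composite risk expressed as the product of severity and likelihood ranks."""
    return SEVERITY[severity] * LIKELIHOOD[likelihood]

def risk_level(index: int) -> str:
    """Map a composite index to an acceptability band (illustrative cut-offs)."""
    if index >= 15:
        return "unacceptable"
    if index >= 6:
        return "tolerable with mitigation"
    return "acceptable"
```

For example, a hazard assessed as "major" severity and "occasional" likelihood yields an index of 12, falling in the "tolerable with mitigation" band under these assumed cut-offs.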
PP. Risk Control. Steps taken to eliminate (remove) hazards or to mitigate (lessen) their effects by reducing the severity and/or likelihood of risk associated with those hazards.
QQ. Safety Assurance (SA). A formal management process within the SMS that systematically provides confidence that an organization’s products/services meet or exceed safety requirements. A Safety Assurance flow diagram (found in Volume 2, Chapter 4, Appendix A, SMS Assessment Guide, Component 2.0) includes the Framework element/process numbers and other notes to help the reader visualize the Framework in terms of a process flow (with interfaces), and understand the component/element/process expectations.
RR. Safety Culture. The product of individual and group values, attitudes, competencies, and patterns of behavior that determine the commitment to, and the style and proficiency of, the organization’s management of safety. Organizations with a positive safety culture are characterized by communications founded on mutual trust, by shared perceptions of the importance of safety, and by confidence in the efficacy of preventive measures.
SS. Safety Management System (SMS). The formal, top-down business-like approach to managing safety risk. It includes systematic procedures, practices, and policies for the management of safety (as described in this document it includes safety risk management, safety policy, safety assurance, and safety promotion).
TT. Safety Manager. The generic term “safety manager” is used and refers to the function, not necessarily to the individual. The person carrying out the safety manager function is responsible to the Accountable Executive for the performance of the SMS and for the delivery of safety services to the other departments in the organization.
UU. Safety Planning. Part of safety management focused on setting safety objectives and specifying needed operational processes and related resources to fulfill these objectives.
VV. Safety Promotion. A combination of safety culture, training, and data-sharing activities that support the implementation and operation of an SMS in an organization.
WW. Safety Risk. The composite of predicted severity (how bad) and likelihood (how probable) of the potential effect of a hazard in its worst credible (reasonable or believable) system state. The terms “safety risk” and “risk” are interchangeable.
XX. Safety Risk Control. A characteristic of a system that reduces or mitigates (lessens) the potential undesirable effects of a hazard. Controls may include process design, equipment modification, work procedures, training, or protective devices. Safety risk controls must be written in requirements language, measurable, and monitored to ensure effectiveness.
YY. Safety Risk Management (SRM). A formal process within the SMS that describes the system, identifies the hazards, assesses the risk, analyzes the risk, and controls the risk. The SRM process is embedded in the processes used to provide the product/service; it is not a separate/distinct process.
A process flow diagram of Safety Risk Management may be found in Volume 2, Chapter 4, Appendix A, SMS Assessment Guide, Component 3.0.
ZZ. Separate Aviation Maintenance Organizations. Independent maintenance organizations such as, but not limited to, certificated repair stations, non-certificated repair facilities, and separate maintenance organizations. This does not include an air operator’s maintenance organization and is not intended to duplicate the required Component 1.0 B) 1) c) of an air operator’s organization.
AAA. Severity. The degree of loss or harm resulting from a hazard.
BBB. Substitute Risk. A risk unintentionally created as a consequence of safety risk control(s).
CCC. System. An integrated set of constituent elements that are combined in an operational or support environment to accomplish a defined objective. These elements include people, hardware, software, firmware, information, procedures, facilities, services, and other support facets.
DDD. System Attributes. Refer to the definition for Attributes, above.
EEE. Top Management. The person or group of people who direct and control an organization [ref. ISO 9000-2000 definition 3.2.7]. In many large organizations, this can be the CEO or the board of directors; in smaller organizations, this might be the owner of the company.
Section 2. SMS Framework Structure and Expectations
2.2.2.1. SMS FRAMEWORK STRUCTURE. The Safety Management System (SMS) framework is broken down into components, elements, and processes. The components and elements are based on the International Civil Aviation Organization (ICAO) SMS Framework.
A. Components. There are four components of an SMS. Two components represent the core operational activities underlying an SMS, and two components represent the organizational arrangements that are necessary to support the two core operational activities. The four components of an SMS are:
• Safety Policy and Objectives (Component 1.0)
• Safety Risk Management (SRM) (Component 2.0)
• Safety Assurance (SA) (Component 3.0)
• Safety Promotion (Component 4.0)
B. Core Operational Activities. The two core operational activities of an SMS are Safety Risk Management and Safety Assurance.
C. Elements. Each of the four components of an SMS is further subdivided into elements, each of which defines important aspects of the component. There are twelve elements in the SMS framework arranged as follows:
1) For Component 1.0 - Safety Policy and Objectives
a) Element 1.1 - Safety policy
b) Element 1.2 - Management commitment and safety accountabilities
c) Element 1.3 - Key safety personnel
d) Element 1.4 - Emergency preparedness and response
e) Element 1.5 - SMS documentation and records
2) For Component 2.0 - Safety Risk Management (SRM)
a) Element 2.1 - Hazard identification and analysis
b) Element 2.2 - Risk assessment and control
3) For Component 3.0 - Safety Assurance (SA)
a) Element 3.1 - Safety performance monitoring and measurement
b) Element 3.2 - The management of change
c) Element 3.3 - Continuous improvement
4) For Component 4.0 - Safety Promotion
a) Element 4.1 - Competencies and training
b) Element 4.2 - Communication and awareness
D. Processes. Certain elements in the Safety Risk Management, Safety Assurance and Safety Promotion components are further broken down into processes.
2.2.2.3. SMS FRAMEWORK EXPECTATIONS. To make the SMS Framework easier to understand and use, components, elements, and processes have been defined in terms of functional expectations, or how an organization would need to use them in order to contribute to an effective SMS. They are called “functional” expectations because they describe the “what”, not the “how”, of each process. For example, the “what” of a de-icing process is to prevent any aircraft from taking off with ice adhering to any critical surface. The “how” of the de-icing process would include de-icing equipment procedures, flight crew de-icing procedures, holdover table activities, etc., and may differ between individual organizations. Organizations are expected to meet SMS Framework expectations by developing processes to fit their unique business and management models.
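The component/element hierarchy described in this section can be represented as a simple data structure, which can be convenient when building assessment checklists or tracking tools. The mapping below reflects the components and elements listed above; its use in any actual tool would be an implementation choice, not a Framework requirement.

```python
# SMS Framework hierarchy as listed in this section:
# four components, each subdivided into its elements.
SMS_FRAMEWORK = {
    "1.0 Safety Policy and Objectives": [
        "1.1 Safety policy",
        "1.2 Management commitment and safety accountabilities",
        "1.3 Key safety personnel",
        "1.4 Emergency preparedness and response",
        "1.5 SMS documentation and records",
    ],
    "2.0 Safety Risk Management (SRM)": [
        "2.1 Hazard identification and analysis",
        "2.2 Risk assessment and control",
    ],
    "3.0 Safety Assurance (SA)": [
        "3.1 Safety performance monitoring and measurement",
        "3.2 The management of change",
        "3.3 Continuous improvement",
    ],
    "4.0 Safety Promotion": [
        "4.1 Competencies and training",
        "4.2 Communication and awareness",
    ],
}

def element_count() -> int:
    """Total number of elements across all four components."""
    return sum(len(elements) for elements in SMS_FRAMEWORK.values())
```

Iterating over this structure reproduces the twelve elements of the Framework in component order.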
A. The SMS functional expectations are further defined in terms of performance objectives and design expectations:
1) Performance Objectives. Performance Objectives are the desired outcomes of the particular element or process.
2) Design Expectations. Design Expectations are the characteristics of the element or process that, if properly implemented, should provide the outcomes identified in the performance objectives.
B. The Performance Objectives and Design Expectations for the entire SMS Framework are found in the sections that follow.
COMPONENT 1.0 – SAFETY POLICY AND OBJECTIVES
Component Performance Objectives: The organization will develop and implement an integrated, comprehensive SMS and will incorporate a procedure to identify and maintain compliance with all applicable regulatory requirements.
Component General Design Expectations:
A. Safety management will be included in the complete scope and life cycle of the organization’s systems, including:
1) For air operators:
• Flight operations
• Operational control (dispatch/flight following)
• Maintenance and inspection
• Cabin safety
• Ground handling and servicing
• Cargo handling
• Training
2) For separate aviation maintenance organizations:
• Parts/materials
• Resource management (tools and equipment, personnel, and facilities)
• Technical data
• Maintenance and inspection
• Quality control
• Records management
• Contract maintenance
• Training
3) For flight training organizations:
• Flight operations
• Operational control (dispatch/flight following)
• Maintenance and inspection
• Training
4) For aerodromes:
• Operations, including all weather operations
• Runway safety
• Ground handling and servicing
• Maintenance and inspection
• Construction
5) For air traffic services providers:
• Air traffic control and separation
• Airspace design
• New or changed ATS procedures
• Training
B. SMS processes will be:
• Documented
• Monitored
• Measured
• Analyzed
C. SMS outputs will be:
• Recorded
• Monitored
• Measured
• Analyzed
D. It is expected that:
• The organization will promote the growth of a positive safety culture (described under Component 4.0, B)
• If the organization has a quality policy, top management will ensure that the quality policy is consistent with the SMS
• The SMS will include a means to comply with all applicable GACA regulatory requirements
• The organization will establish and maintain a procedure to identify all applicable current and forthcoming GACA regulatory requirements applicable to the organization
• The organization will establish and maintain procedures with measurable criteria to accomplish the objectives of the safety policy
• The organization will establish and maintain supervisory and operational controls to ensure procedures are followed for safety-related operations and activities
• The organization will establish and maintain a safety management plan to describe how it will achieve its safety objectives
ELEMENT 1.1 - SAFETY POLICY
Performance Objective: Top management will define the organization’s safety policy and convey its expectations and objectives to its employees.
Design Expectations:
A. Top management will define the organization’s safety policy.
B. The safety policy will:
• Include a commitment to implement an SMS
• Include a commitment to continual improvement in the level of safety
• Include a commitment to the management of safety risk
• Include a commitment to comply with applicable regulatory requirements
• Include a commitment to encourage employees to report safety issues without reprisal (as per Process 3.1.6)
• Establish clear standards for acceptable behavior
• Provide management guidance for setting safety objectives
• Provide management guidance for reviewing safety objectives
• Be documented
• Be communicated with visible management endorsement to all employees and responsible parties
• Be reviewed periodically to ensure it remains relevant and appropriate to the organization
• Identify responsibility and accountability of management and employees with respect to safety performance
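An Inspector reviewing a safety policy against the expectations above is essentially working through a checklist. The sketch below shows one possible way to encode that review; the item wording is paraphrased from this element, and the function is illustrative only, not an official GACA assessment tool.

```python
# Checklist paraphrased from the Element 1.1 design expectations.
# Illustrative sketch only; not an official GACA assessment tool.
POLICY_EXPECTATIONS = [
    "commitment to implement an SMS",
    "commitment to continual improvement in the level of safety",
    "commitment to the management of safety risk",
    "commitment to comply with applicable regulatory requirements",
    "commitment to non-reprisal safety reporting",
    "clear standards for acceptable behavior",
    "management guidance for setting safety objectives",
    "management guidance for reviewing safety objectives",
    "documented",
    "communicated to all employees with visible management endorsement",
    "reviewed periodically for continued relevance",
    "management and employee safety accountabilities identified",
]

def missing_items(satisfied: set) -> list:
    """Return the expectation items not yet evidenced in the policy under review."""
    return [item for item in POLICY_EXPECTATIONS if item not in satisfied]
```

A review is complete when `missing_items` returns an empty list for the evidence gathered during the assessment.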
ELEMENT 1.2 - MANAGEMENT COMMITMENT AND SAFETY ACCOUNTABILITIES
Performance Objective: The organization will define, document, and communicate the safety roles, responsibilities, and authorities throughout its organization.
Design Expectations:
A. Top management will have the ultimate responsibility for the SMS.
B. Top management will provide resources essential to implement and maintain the SMS.
C. Aviation safety-related positions, responsibilities, and authorities will be:
• Defined
• Documented
• Communicated throughout the organization
D. The organization will define levels of management that can make safety risk acceptance decisions.
ELEMENT 1.3 - KEY SAFETY PERSONNEL
Performance Objective: The organization will appoint a safety manager to manage, monitor, and coordinate the SMS processes.
Design Expectations:
A. Top management will appoint a member of management who, irrespective of other responsibilities, will have responsibilities and authority that include:
• Ensuring that processes needed for the SMS are established, implemented, and maintained
• Reporting to top management on the performance of the SMS and the need for improvement
• Ensuring the promotion of awareness of safety expectations throughout the organization
B. The appointed Safety Manager shall not hold any operational responsibilities, to avoid any potential conflict of interest.
ELEMENT 1.4 - EMERGENCY PREPAREDNESS AND RESPONSE
Performance Objective The organization will develop and implement procedures that it will follow in the event of an accident or incident to mitigate the effects of these events. Design Expectations: A. The organization will establish procedures to:
• Identify hazards that have potential for accidents and incidents • Coordinate and plan the organization’s response to accidents and incidents • Execute periodic exercises of the organization’s response
ELEMENT 1.5 - SMS DOCUMENTATION AND RECORDS
Performance Objective The organization will have documented safety policies, objectives, procedures, a document/record management process, and a safety management plan that meet organizational safety expectations and objectives.
Design Expectations: A. The organization will establish and maintain information, in paper or electronic form, to describe: • Safety policies • Safety objectives • SMS expectations • The scope of SMS requirements
• Accountability, responsibilities, and authorities of the Accountable Executive and safety personnel • Safety procedures and processes • Interactions/interfaces between the safety-related procedures and processes • SMS outputs B. The organization will maintain its safety management plan in accordance with the objectives and expectations contained within this element (1.5).
C. Documentation Management. 1) Documentation will be: • Legible • Dated (with dates of revisions) • Readily identifiable • Maintained in an orderly manner • Retained for a specified period of time as determined by the organization 2) The organization will establish and maintain procedures for controlling all documents required by this Framework to ensure that:
• They can be located • They are periodically: o Reviewed o Revised as needed o Approved for adequacy by authorized personnel 3) The current versions of relevant documents are available at all locations where essential SMS operations are performed.
4) Obsolete documents are promptly removed from all points of use or otherwise assured against unintended use. D. Records Management. 1) The organization will establish and maintain procedures to: • Identify • Maintain • Dispose of their SMS records 2) SMS records will be:
• Legible • Identifiable • Traceable to the activity involved 3) SMS records will be maintained in such a way that they are: • Readily retrievable • Protected against: o Damage o Deterioration o Loss 4) Records retention times will be documented.
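The record-management expectations above (identifiable, traceable, dated, with a documented retention time) can be illustrated with a minimal record structure. The following sketch is not part of the Framework; the field names and the five-year retention period are hypothetical examples, since retention times are determined by the organization.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical SMS record: the fields mirror the expectations above --
# readily identifiable, traceable to the activity, dated, with a
# documented retention time set by the organization.
@dataclass(frozen=True)
class SmsRecord:
    record_id: str          # readily identifiable
    activity: str           # traceable to the activity involved
    created: date           # dated
    retention_days: int     # documented retention time

    def disposal_due(self) -> date:
        """Earliest date on which the record may be disposed of."""
        return self.created + timedelta(days=self.retention_days)

rec = SmsRecord("AUD-2024-001", "Internal audit of line maintenance",
                date(2024, 3, 1), retention_days=5 * 365)
print(rec.disposal_due())  # 2029-02-28
```

A real system would add the protection expectations (backup against damage, deterioration, and loss) around whatever store holds these records.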
COMPONENT 2.0 - SAFETY RISK MANAGEMENT (SRM)
Component Performance Objective The organization will develop processes to understand the critical characteristics of its systems and operational environment and apply this knowledge to identify hazards, analyze and assess risk, and design risk controls.
Component General Design Expectations: A. Safety Risk Management (SRM) will, at a minimum, include the following processes: • System and task analysis • Hazard identification • Safety risk analysis • Safety risk assessment • Safety risk control and mitigation B. The SRM process will be applied to:
• Initial designs of systems, organizations, and/or products • The development of operational procedures • Hazards that are identified in the safety assurance functions (described in Component 3.0, B) • Planned changes to operational processes C. The organization will establish feedback loops between assurance functions described in Component 3.0 to evaluate the effectiveness of safety risk controls.
D. The organization will define a risk acceptance process that: • Defines acceptable and unacceptable levels of safety risk. • Describes: o Severity levels o Likelihood levels • Defines specific levels of management that can make safety risk acceptance decisions • Defines acceptable risk for hazards that will exist in the short-term while safety risk control/mitigation plans are developed and executed
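A common way to implement the severity/likelihood definitions and acceptance levels described above is a risk matrix. The sketch below is illustrative only: the five-point scales, the index thresholds, and the management levels in the comments are assumptions, since GACAR Part 5 leaves the actual definitions to each organization.

```python
# Illustrative 5x5 risk matrix; scales and thresholds are assumptions,
# not values prescribed by the Framework.
SEVERITY = {"negligible": 1, "minor": 2, "major": 3,
            "hazardous": 4, "catastrophic": 5}
LIKELIHOOD = {"extremely improbable": 1, "improbable": 2, "remote": 3,
              "occasional": 4, "frequent": 5}

def risk_index(severity: str, likelihood: str) -> int:
    """Combine the two scales into a single index (1..25)."""
    return SEVERITY[severity] * LIKELIHOOD[likelihood]

def acceptability(severity: str, likelihood: str) -> str:
    """Map a hazard to an acceptance decision; the thresholds and the
    management levels in the comments are hypothetical examples."""
    idx = risk_index(severity, likelihood)
    if idx <= 4:
        return "acceptable"                 # e.g., line management may accept
    if idx <= 12:
        return "tolerable with mitigation"  # e.g., senior management must accept
    return "unacceptable"                   # control/mitigate before proceeding

print(acceptability("major", "remote"))     # index 9 -> tolerable with mitigation
```

The same matrix can serve the short-term acceptance case in the last bullet by re-scoring the hazard once interim mitigations are in place.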
ELEMENT 2.1 - HAZARD IDENTIFICATION AND ANALYSIS
Process 2.1.1 - System Description and Task Analysis Performance Objective The organization will analyze its systems, operations, and operational environment to gain an understanding of critical design and performance factors, processes, and activities to identify hazards.
Design Expectations: A. System descriptions and task analysis will be developed to the level of detail necessary to: • Identify hazards • Develop operational procedures • Develop and implement risk controls
PROCESS 2.1.2 - IDENTIFY HAZARDS
Performance Objective The organization will identify and document the hazards in its operations that are likely to cause death, serious physical harm, or damage to equipment or property in sufficient detail to determine associated level of risk and risk acceptability.
Design Expectations: A. Hazards will be: • Identified for the entire scope of the system, as defined in the system description • Identified based on reactive and proactive methods. • Documented B. Hazard information will be:
• Tracked • Managed through the entire SRM process
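The expectation that hazard information be tracked and managed through the entire SRM process can be sketched as a hazard register entry that carries its own audit trail. The stage names follow Processes 2.1.2 through 2.2.3; the class and field names are illustrative assumptions, not Framework terminology.

```python
# Illustrative hazard register entry, tracked through the SRM stages
# (identify -> analyze -> assess -> control) per Processes 2.1.2-2.2.3.
SRM_STAGES = ("identified", "analyzed", "assessed", "controlled")

class HazardRecord:
    def __init__(self, hazard_id: str, description: str):
        self.hazard_id = hazard_id
        self.description = description
        self.stage = "identified"
        self.history = ["identified"]   # managed through the entire SRM process

    def advance(self) -> str:
        """Move the hazard to the next SRM stage, preserving the audit trail."""
        i = SRM_STAGES.index(self.stage)
        if i == len(SRM_STAGES) - 1:
            raise ValueError("hazard already controlled")
        self.stage = SRM_STAGES[i + 1]
        self.history.append(self.stage)
        return self.stage

h = HazardRecord("HAZ-001", "FOD on apron during night turnarounds")
h.advance(); h.advance(); h.advance()
print(h.stage)   # controlled
```

Keeping the full `history` per hazard is what makes the register auditable end to end, rather than a snapshot of current status only.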
ELEMENT 2.2 - RISK ASSESSMENT AND CONTROL
Process 2.2.1 - Analyze Safety Risk Performance Objective The organization will determine and analyze the severity and likelihood of potential events associated with identified hazards, and will identify factors associated with unacceptable levels of severity or likelihood.
Design Expectations: A. The safety risk analysis process will include: • Existing safety risk controls • Triggering mechanisms • Safety risk of reasonably likely outcomes from the existence of a hazard, to include estimation of the:
o Likelihood o Severity
Process 2.2.2 - Assess Safety Risk Performance Objective The organization will assess risk associated with each identified hazard and define risk acceptance procedures and levels of management that can make safety risk acceptance decisions.
Design Expectations: Each hazard will be assessed for its safety risk acceptability using the safety risk acceptance process described in Component 2.0, D).
Process 2.2.3 - Control/Mitigate Safety Risk Performance Objective The organization will design and implement a risk control for each identified hazard for which there is an unacceptable risk, to reduce to acceptable levels the potential for death, serious physical harm, or damage to equipment or property. The residual or substitute risk will be analyzed before implementing any risk control.
Design Expectations: A. Safety control/mitigation plans will be defined for each hazard with unacceptable risk. B. Safety risk controls will be: • Clearly described • Evaluated to ensure that the expectations have been met • Ready to be used in their intended operational environment • Documented C. Substitute risk will be evaluated when creating safety risk controls/mitigations.
COMPONENT 3.0 - SAFETY ASSURANCE
Component Performance Objective The organization will monitor, measure, and evaluate the performance and effectiveness of risk controls. Component General Design Expectations: A. The organization will monitor its systems and operations to:
• Identify new hazards • Measure the effectiveness of safety risk controls • Ensure compliance with regulatory requirements applicable to the SMS • Ensure that the safety assurance function is based upon a comprehensive system description as described in Process 2.1.1 B. The organization will collect the data necessary to demonstrate the effectiveness of its:
• Operational processes • SMS
ELEMENT 3.1 - SAFETY PERFORMANCE MONITORING AND MEASUREMENT
Process 3.1.1 - Continuous Monitoring Performance Objective The organization will monitor operational data, including products and services received from contractors, to identify hazards, measure the effectiveness of safety risk controls, and assess system performance.
Design Expectations: A. The organization will monitor operational data (e.g., duty logs, crew reports, work cards, process sheets, and reports from the employee safety feedback system specified in Process 3.1.6) to:
• Determine conformity to safety risk controls (described in Process 2.2.3) • Measure the effectiveness of safety risk controls (described in Process 2.2.3) • Assess SMS system performance • Identify hazards B. The organization will monitor products and services received from subcontractors.
Process 3.1.2 - Internal Audits by Operational Departments Performance Objective The organization will perform regularly scheduled internal audits of its operational processes, including those performed by contractors, to determine the performance and effectiveness of risk controls.
Design Expectations: A. Line management of operational departments will conduct regular internal audits of safety-related functions of the organization’s operational processes (production system). These audits will include any subcontractors who perform those functions.
NOTE: The internal audit is a primary means of output measurement under Component 1.0, C). B. Line management will ensure that regular audits are conducted to: • Determine conformity with safety risk controls • Assess performance of safety risk controls C. Planning of the audit program will take into account:
• Safety criticality of the processes to be audited • The results of previous audits D. The organization will define: • Audits, including: o Criteria o Scope o Frequency o Methods • How it will select auditors • The requirement that auditors will not audit their own work E. The organization will document audit procedures, to include:
• The responsibilities • Expectations for: o Planning audits o Conducting audits o Reporting results o Maintaining records o Auditing contractors and vendors
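Two of the expectations above lend themselves to a short sketch: audit frequency driven by the safety criticality of the audited process, and the independence rule that auditors do not audit their own work. The criticality scale, the audits-per-year mapping, and all names below are illustrative assumptions.

```python
from typing import NamedTuple

# Illustrative audit planning sketch: frequency follows safety criticality,
# and auditor selection enforces the "never audit your own work" rule.
class AuditArea(NamedTuple):
    name: str
    department: str
    criticality: int        # 1 (low) .. 3 (high); scale is an assumption

def audits_per_year(area: AuditArea) -> int:
    # Assumption: higher safety criticality -> more frequent audits.
    return {1: 1, 2: 2, 3: 4}[area.criticality]

def pick_auditor(area: AuditArea, auditors: dict[str, str]) -> str:
    """Select any auditor not belonging to the audited department."""
    for name, dept in auditors.items():
        if dept != area.department:
            return name
    raise LookupError("no independent auditor available")

area = AuditArea("Line maintenance release", "Maintenance", criticality=3)
pool = {"Alia": "Maintenance", "Omar": "Quality"}
print(audits_per_year(area), pick_auditor(area, pool))  # 4 Omar
```

The results of previous audits (the other planning input named above) would feed back into the criticality rating between cycles.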
Process 3.1.3 - Internal Evaluation Performance Objective The organization will conduct internal evaluations of the SMS and operational processes at planned intervals to determine that the SMS conforms to its objectives and expectations.
Design Expectations: A. The organization will conduct internal evaluations of the operational processes and the SMS at planned intervals to determine that the SMS conforms to objectives and expectations. NOTE: Sampling of SMS output measurement is a primary control under Component 1.0, C).
B. Planning of the evaluation program will take into account: • Safety criticality of the processes being evaluated • The results of previous evaluations C. The organization will define: • Evaluations, including:
o Criteria o Scope o Frequency o Methods • The processes used to select the evaluators D. The organization will document evaluation procedures, which include: • The responsibilities • Requirements for: o Planning evaluations o Conducting evaluations o Reporting results o Maintaining records o Evaluating contractors and vendors E. The program will include an evaluation of the programs described in Component 1.0, B).
F. The person or organization performing evaluations of operational processes must be independent of the process being evaluated.
Process 3.1.4 - External Auditing of the SMS Performance Objective The organization will include the results of assessments performed by oversight organizations in its analysis of data. Design Expectations The organization will include the results of oversight organization assessments in the analyses conducted as described in Process 3.1.7.
Process 3.1.5 - Investigation Performance Objective The organization will establish procedures to collect data and investigate incidents, accidents, and instances of potential regulatory non-compliance to identify potential new hazards or risk control failures.
Design Expectations: A. The organization will collect data on: • Incidents • Accidents • Real and potential regulatory non-compliance B. The organization will establish procedures to: • Investigate accidents • Investigate incidents • Investigate instances of real and potential regulatory non-compliance
Process 3.1.6 - Employee Reporting and Feedback System Performance Objective The organization will establish and maintain a mandatory, voluntary, and confidential safety reporting and feedback system. Data obtained from this system will be monitored to identify emerging hazards and to assess the performance of risk controls in the operational systems.
Design Expectations: A. The organization will establish and maintain a mandatory, voluntary, and confidential employee safety reporting and feedback system, as described in Component 4.0 B). B. Employees will be encouraged to use the voluntary and confidential safety reporting and feedback system without fear of reprisal and to submit solutions/safety improvements where possible.
C. Data from the safety reporting and feedback system will be monitored to identify emerging hazards. D. Data collected in the safety reporting and feedback system will be included in analyses described in Process 3.1.7.
Process 3.1.7 - Analysis of Data Performance Objective The organization will analyze the data described in Processes 3.1.1 through 3.1.6 to assess the performance and effectiveness of risk controls in the organization’s operational processes and the SMS, and to identify root causes of deficiencies and potential new hazards.
Design Expectations: A. The organization will analyze the data described in Processes 3.1.1 through 3.1.6 to demonstrate the effectiveness of: • Risk controls in the organization’s operational processes • The SMS B. Through data analysis, the organization will evaluate where improvements can be made to the organization’s:
• Operational processes • The SMS
Process 3.1.8 - System Assessment Performance Objective The organization will perform an assessment of the performance and effectiveness of risk controls, conformance to SMS expectations as stated herein, and the objectives of the safety policy.
Design Expectations: A. The organization will assess the performance of: • Safety-related functions of operational processes against their objectives and expectations • The SMS against its objectives and expectations B. System assessments will document results that indicate a finding of:
• Conformity with existing safety risk control(s)/SMS expectation(s) (including regulatory requirements) • Nonconformity with existing safety risk control(s)/SMS expectation(s) (including regulatory requirements) • New hazard(s) found C. The SRM process will be utilized if the assessment indicates:
• The identification of new or potential hazards • The need for system changes D. The organization will maintain records of assessments in accordance with the expectations of Element 1.5.
ELEMENT 3.2 - MANAGEMENT OF CHANGE
Performance Objective The organization’s management will identify changes within the organization that may affect established processes and services, and will determine acceptable safety risks for those changes. Such changes include the introduction of new technology or equipment, changes in the operating environment, changes in key personnel, significant changes in staffing levels, changes in safety regulatory requirements, significant restructuring of the organization, physical changes, changes to existing system designs, and new or modified operations or procedures.
Design Expectations: A. The following changes will not be implemented until the safety risk of each identified hazard is determined to be acceptable: • Introduction of new technology or equipment • Changes in the operating environment • Changes in key personnel • Significant changes in staffing levels • Changes in safety regulatory requirements • Significant restructuring of the organization • Introduction of a new aircraft type • Introduction of new procedures, including maintenance and operational procedures • Physical changes, such as facilities (including operational setup, aerodrome, and ground facilities) B. Change management hazard identification and analysis will be completed prior to introducing the change into operation. This will include, but is not limited to:
- A Preliminary Safety Analysis (PSA) covering, but not limited to: - Hazardous components - Safety-related interfaces between the various system elements, including software - Environmental constraints, including operating environments - Operating, test, maintenance, and emergency procedures - Facilities, support equipment, and training - Safety-related equipment - Malfunctions of the system, subsystems, or software. The Preliminary Safety Analysis (PSA) will consider the following, but is not limited to: • Establishment of the PRA team • Definition and description of the system to be analyzed • Collection of risk information from previous and similar systems C. An operator may use a simplified framework, namely the Training, Equipment, Personnel, Infrastructure, Organization, Information, Logistics, Quality, and Contingency (TEPIOILQC) arrangement, for effectively implementing safety management.
D. It is recommended to use TEPIOILQC to understand how the change will affect the current TEPIOILQC arrangement, whether the change requires a new arrangement, and whether there is a safety risk to any TEPIOILQC element, and to consider the common sources of risk in the PRA:
• Preliminary Hazard Lists (PHLs) • Operation Readiness Case (ORC) – this document will be provided as the last document prior to the handover to the operations phase and will include any residual safety risks from the PRA and PRR.
E. The SRM process may allow an organization to take interim immediate action to mitigate existing safety risk. F. The expectation of a robust safety analysis for any change should recognize the working environment within which the organization operates. The working environment can often be articulated via the acronym ‘TEPIOILQC’, which needs to be considered when assessing the desired change.
G. Elements of the Working Environment These include: • Training – The provision of the means to practice, develop and validate the competence of sufficient Personnel for the change. • Equipment – The provision of systems and equipment, needed to outfit/equip an individual, group or organization.
• Personnel – The timely provision of sufficient and motivated personnel to deliver the change outputs. • Information – The provision of a coherent development of approved data, information, and knowledge requirements for capabilities and all processes designed to gather and handle data, information, and knowledge.
• Organization – Relates to the operational and non-operational organizational relationships of people. It is typically made up of organizational structures. • Infrastructure – The development, management, and disposal of all permanent buildings and structures, land, utilities, and facility management services. It includes estate development and structures that support the organization’s personnel.
• Logistics – Relates to the aspect of product operations which deals with the design and development, storage, transport, distribution, maintenance, evacuation, and disposition of material (the supply chain).
• Quality – The robust Quality Management System deployed to support the ongoing safety of the change. • Contingency arrangement – The robust emergency plans and business continuity procedures that are in place and practiced.
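The TEPIOILQC change-impact assessment described above can be sketched as a simple checklist over the nine elements. The sketch is illustrative only; which elements a given change touches is a judgment the organization makes, and the example change is hypothetical.

```python
# Illustrative change-impact checklist over the TEPIOILQC working environment.
TEPIOILQC = ("Training", "Equipment", "Personnel", "Infrastructure",
             "Organization", "Information", "Logistics", "Quality", "Contingency")

def change_impact(affected: set[str]) -> dict[str, bool]:
    """Mark which TEPIOILQC elements a proposed change touches."""
    unknown = affected - set(TEPIOILQC)
    if unknown:
        raise ValueError(f"not a TEPIOILQC element: {unknown}")
    return {element: element in affected for element in TEPIOILQC}

# Hypothetical example: introducing a new aircraft type is judged to touch
# training, equipment, and logistics.
impact = change_impact({"Training", "Equipment", "Logistics"})
print([e for e, hit in impact.items() if hit])
```

Each element marked `True` would then feed a hazard identification pass under the PRA before the change is accepted.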
ELEMENT 3.3 - CONTINUAL IMPROVEMENT
Performance Objective The organization will promote continual improvement of its SMS through recurring application of Safety Risk Management (Component 2.0), Safety Assurance (Component 3.0), and by using safety lessons learned and communicating them to all personnel.
Design Expectations: A. The organization will continuously improve SMS and safety risk control effectiveness through the use of the safety and quality policies, objectives, audit and evaluation results, analysis of data, corrective and preventive actions, and management reviews.
B. The organization will develop safety lessons learned. 1) Lessons learned information will be used to promote continual improvement of safety; and 2) The organization will communicate information on safety lessons learned throughout the organization.
Process 3.3.1 - Preventive/Corrective Action Performance Objective The organization will take corrective and preventive action to eliminate the causes of nonconformance identified during analysis, to prevent recurrence.
Design Expectations: A. The organization will develop: • Corrective actions for identified nonconformities with risk controls • Preventive actions for identified potential nonconformities with risk controls B. Safety lessons learned will be considered in the development of:
• Corrective actions • Preventive actions C. The organization will take necessary corrective and preventive action based on the findings of investigations. D. The organization will prioritize and implement corrective and preventive action(s) in a timely manner.
E. Records will be kept and maintained of the disposition and status of corrective and preventive actions.
Process 3.3.2 - Management Review Performance Objective Top management will conduct regular reviews of the SMS, including outputs of safety risk management, safety assurance, and lessons learned. Management reviews will include assessing the performance and effectiveness of an organization’s operational processes and the need for improvements.
Design Expectations: A. Top management will conduct regular reviews of the SMS, including: • The outputs of safety risk management (Component 2.0) • The outputs of safety assurance (Component 3.0) • Lessons learned (Element 3.3, B) B. Management reviews will include assessing the need for improvements to the organization’s:
• Operational processes • SMS C. The organization will communicate information on safety lessons learned to all personnel.
COMPONENT 4.0 - SAFETY PROMOTION
Component Performance Objective Top management will promote the growth of a positive safety culture and communicate it throughout the organization. General Design Expectations: A. Top management will promote the growth of a positive safety culture by:
• Publication of senior management’s stated commitment to safety to all employees
• Visibly demonstrating their commitment to the SMS
• Communicating the safety responsibilities for the organization’s personnel
• Clearly and regularly communicating safety policy, goals, objectives, standards, and performance to all organizational employees
• Creating an effective employee reporting and feedback system that provides confidentiality, as needed
• Using a safety information system that provides an accessible, efficient means to retrieve safety information
• Making essential resources available to implement and maintain the SMS
ELEMENT 4.1 - COMPETENCIES AND TRAINING
Process 4.1.1 - Personnel Expectations (Competence) Performance Objective The organization will document competency requirements for those positions identified in Element 1.2 C) and 1.3 and ensure those requirements are met.
Design Expectations: A. The organization will determine and document competency requirements for those positions identified in Element 1.2 C) and 1.3. B. The organization will ensure that those individuals in the positions identified in Element 1.2 C) and 1.3, meet the Process 4.1.1 A) competency requirements.
Process 4.1.2 - Training Performance Objective The organization will develop, document, deliver, and regularly evaluate training necessary to meet competency requirements of 4.1.1. Design Expectations:
A. Training needed to meet competency requirements of 4.1.1 will be developed for those individuals in the positions identified in Element 1.2 and 1.3. B. Training development will consider scope, content, and frequency of training required to maintain competency for those individuals in the positions identified in Element 1.2 and 1.3.
C. Employees will receive training commensurate with their: • Position level within the organization • Impact on the safety of the organization’s products or services D. To ensure training currency, it will be periodically:
• Reviewed • Updated
ELEMENT 4.2 - COMMUNICATION AND AWARENESS
Performance Objective Top Management will communicate the outputs of its SMS to its employees, and will provide its oversight organization access to SMS outputs in accordance with established agreements and disclosure programs.
Design Expectations: A. The organization will communicate outputs of the SMS to its employees. B. The organization will provide its oversight organization access to the outputs of the SMS. C. The organization’s SMS will be able to inter-operate with other organizations’ SMSs to cooperatively manage issues of mutual concern.
2.3.1.1. PURPOSE.
A. SMS Implementation. This section contains the guidance, expectations, and procedures necessary for the implementation of a Safety Management System (SMS) by aviation organizations (air navigation service providers, air operators, aviation maintenance organizations, aerodromes, flight training organizations, etc.) that are eligible for a phased implementation of an SMS, as provided for by the General Authority of Civil Aviation Regulations (GACAR) Part 199.
B. Objective. The overall objective of this implementation guidance is to assist the General Authority of Civil Aviation (GACA) aviation safety inspector (Inspector) in evaluating organizations’ SMS implementation activities in order to ensure compliance with the GACAR phased implementation requirements.
2.3.1.3. APPLICABILITY. This implementation guidance is designed for use in determining the acceptability of an existing aviation organization’s SMS phased implementation activities. Phased implementation guidance is not designed to be used by new applicants wishing to commence operations under GACAR Parts 119, 139, 141, 142, 145 or 171. These aviation organizations are required to have an SMS implemented at the time of their initial certification. This implementation guidance is based on the SMS Framework in Chapter 2 of this Volume.
2.3.1.5. REFERENCES. The following references are recommended reading material for users of this implementation guide in the development and implementation of an SMS: • GACAR Part 199 • International Civil Aviation Organization (ICAO) Document 9859 (as amended), ICAO Safety Management Manual (SMM) – especially the chapter which addresses the PHASED APPROACH TO SMS IMPLEMENTATION
• SMS Framework Guidance in Chapter 2 of this volume • SMS Assessment Guidance in Chapter 4 of this volume • SMS Acceptance Guidance in Chapter 5 of this volume (especially Section 3, Accepting Phased Implementation of SMS)
2.3.1.7. GUIDANCE DOCUMENTS AND TOOLS.
A. SMS Program Guidance. 1) SMS Framework Guidance. The GACA-developed SMS Framework Guidance is the standard for implementation of SMS by aviation organizations. It is similar in scope and format to International Organization for Standardization (ISO) standards and is modeled after the safety, quality, and environmental management standards developed by a variety of organizations such as ISO and the International Air Transport Association (IATA). The SMS Framework also incorporates the current safety management requirements of the International Civil Aviation Organization (ICAO), and it is closely aligned with the current ICAO SMS Framework.
2) SMS Assessment and Acceptance Guidance. The GACA has developed SMS assessment guidance (Volume 4) and acceptance guidance (Volume 5) as tools for aviation organizations and GACA staff. The SMS assessment and acceptance guidance represents each functional expectation found in the SMS Framework in the form of a question and is intended to be used during the development and implementation of an SMS by an organization, or by GACA staff for oversight guidance. Since the SMS assessment and acceptance guidance is based entirely on the SMS Framework, compliance with it will ensure compliance with the SMS Framework and the phased implementation milestones prescribed under GACAR Parts 5 and 199.
3) SMS Implementation Guidance. The SMS Implementation Guidance contains the expectations and procedures necessary to implement an SMS. 4) Gap Analysis Processes and Tools. An initial step in developing an SMS is for the aviation organization to analyze and assess its existing programs, systems, processes, and activities with respect to the SMS functional expectations found in the SMS Framework. This process is called a “gap analysis”; the “gaps” being those elements in the SMS Framework that are not already being performed by the aviation organization.
NOTE: The gap analysis processes cover all areas of company operations that are subject to regulatory control in accordance with the GACARs and all elements of the SMS Framework.
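The gap analysis described above is, at its core, a comparison of the Framework's elements against what the organization already performs. The sketch below illustrates that comparison; the element IDs follow this chapter, but the set of "existing" practices is a hypothetical example, and a real gap analysis works at the level of individual expectations, not whole elements.

```python
# Illustrative gap analysis: Framework elements vs. practices already in place.
# Element IDs follow this chapter; the 'existing' set is a hypothetical example.
FRAMEWORK_ELEMENTS = {
    "1.1": "Safety policy", "1.2": "Management commitment",
    "1.3": "Key safety personnel", "1.4": "Emergency preparedness",
    "1.5": "SMS documentation and records",
    "2.1": "Hazard identification and analysis",
    "2.2": "Risk assessment and control",
    "3.1": "Safety performance monitoring", "3.2": "Management of change",
    "3.3": "Continual improvement", "4.1": "Competencies and training",
    "4.2": "Communication and awareness",
}

def gap_analysis(existing: set[str]) -> dict[str, str]:
    """Return the Framework elements the organization is not yet performing."""
    return {eid: name for eid, name in FRAMEWORK_ELEMENTS.items()
            if eid not in existing}

gaps = gap_analysis({"1.1", "1.2", "1.4", "2.1", "4.1"})
print(sorted(gaps))   # elements still to be implemented
```

The resulting gap list is what the implementation plan then sequences across the phased maturity levels.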
2.3.2.1. ROLES, RESPONSIBILITIES, AND RELATIONSHIPS.
A. Aviation Organizations. The Safety Management System (SMS) Framework provides guidance for an aviation organization to develop and document its SMS. A separate SMS manual is not specifically required; however, many aviation organizations find a separate SMS manual useful. The SMS may be documented in a form and manner that best serves the aviation organization’s need;
however, any modifications of existing General Authority of Civil Aviation (GACA) approved/accepted programs and their associated documents must be coordinated with the GACA. Safety policies developed by the aviation organization’s top management will be clearly communicated throughout the entire organization. Safety Risk Management (SRM) and Safety Assurance (SA) programs will be developed and maintained. Safety Promotion (SP) activities will take place to instill or reinforce a positive safety culture throughout the organization. Additionally, the appointed Safety Manager shall not hold any operational responsibilities, to avoid any potential conflict of interest.
B. Oversight Organization. The General Authority of Civil Aviation (GACA) office that normally provides regulatory safety oversight of the aviation organization will be referred to as the “oversight organization” and will continue all of its normal oversight and certificate management duties. As organizations develop their SMS, a natural interaction between the safety management efforts of the oversight organization and those of the aviation organization will develop. This relationship can leverage the efforts of both parties to provide a more effective, efficient, and proactive approach to meeting safety requirements while at the same time increasing the flexibility of the aviation organization to tailor their safety management efforts to their individual business models.
1) In order to fully understand the aviation organization’s approach to SMS, it is important for the oversight organization to be fully engaged during SMS development and implementation. Engagement with the aviation organization during SMS development will also provide the oversight organization with an opportunity to gain experience in oversight of the SMS, as well as using SMS as a tool for interfacing with the aviation organization’s management. The oversight organization will also be responsible for reviewing the aviation organization’s implementation plan and its accomplishment at each maturity level of the SMS implementation.
2) Specifically, the oversight organization is responsible to: • Oversee and review gap analysis processes • Review and accept the aviation organization’s implementation plan and other documents • Discuss the requirements of the exit criteria for all implementation phases with the aviation organization. Exit criteria are those SMS development activities that must be completed prior to moving to the next implementation phase. 2.3.2.3. LIMITATIONS. The SMS requirements specified in GACAR Part 5 are specific safety-related requirements that aviation organizations must comply with, in addition to all other GACAR parts that are applicable to their operation.
2.3.3.1. SMS IMPLEMENTATION STRATEGY.
A. SMS Implementation. The SMS implementation strategy comprises four processes, shown in Figure 2.3.3.1, as outlined in the ICAO Safety Management Manual (SMM). The process of implementing SMS must include the following elements.
• Planning and Organizing SMS Implementation
• Safety Risk Management
• Proactive, Reactive, and Predictive Processes
• Continuous Improvement and Safety Assurance
B. The Development and Implementation of an SMS. This task is best accomplished by breaking it down into smaller, more manageable subcomponents. In this way, overwhelming and sometimes confusing complexity, and its underlying workload, may be turned into simpler and more transparent subsets of activities that require only minor increases in workload and resources. This partial allocation of resources may be more commensurate with the requirements of each activity as well as the resources available to the aviation organization.
C. Justification. A phased approach to SMS implementation is recommended because it: (a) provides a manageable series of steps to follow in implementing an SMS, including the allocation of resources; and (b) effectively manages the workload associated with SMS implementation.
D. Cosmetic Compliance. An aviation organization should set as its objective the realistic implementation of a comprehensive and effective SMS, not a token version of one. An organization simply cannot “buy” an SMS manual and expect the benefits of a fully implemented SMS.
E. Feedback. Implementation experiences have shown that while full SMS implementation will certainly take longer with a phased approach, the robustness of the resulting SMS will be enhanced and early benefits realized as each implementation phase is completed. In this way, simpler safety management processes are established and benefits realized before moving on to processes of greater complexity. This is especially true with regard to Safety Risk Management (SRM). In the reactive phase (Level 2), an aviation organization will build an SRM system around known hazards that are already identified. This allows company resources to be focused on developing risk analysis, assessment and control processes (that frequently resolve old long-term issues and hazards) unencumbered by the complexities necessary at the proactive/predictive (Level 3) and the continuous improvement phase (Level 4).
F. Summary. Guidance for a phased implementation of SMS aims at:
• Providing a manageable series of steps to follow in implementing an SMS, including allocation of resources
• Effectively managing the workload associated with SMS implementation
• Pre-empting a “box checking” exercise
• Realizing safety management benefits and a return on investment during an SMS implementation project
2.3.3.3. IMPLEMENTATION LEVELS. The overall objective of the levels is to develop and implement an integrated, comprehensive SMS for the organization.
A. Implementation Orientation & Commitment. SMS implementation begins with the aviation organization’s recognition that GACAR Part 5 is applicable and with top management’s commitment to begin the steps of initiating the SMS development process, including gathering necessary information, evaluating corporate goals and objectives, and committing resources to the SMS implementation effort.
B. Implementation Level One: Planning and Organization. Level One begins when an aviation organization’s top management commits to providing the resources necessary for full implementation of SMS throughout the organization.
1) Gap Analysis. The first step in developing an SMS is for the aviation organization to analyze its existing programs, systems, and activities with respect to the SMS functional expectations found in the SMS Framework. This analysis is called a “gap analysis,” the “gaps” being those components, elements, and processes in the SMS Framework that are not already being performed by the aviation organization.
• The gap analysis process should consider and encompass the entire organization (e.g., functions, processes, organizational departments, etc.) to be covered by the SMS.
• The gap analysis should be continuously updated as the aviation organization progresses through the SMS implementation process.
2) Implementation Plan. Once the gap analysis has been performed, an implementation plan is prepared. The implementation plan is simply a “road map” describing how the aviation organization intends to close the existing gaps by meeting the objectives and expectations in the SMS Framework. The implementation plan must be accepted by the GACA before specific implementation activities incorporated in the plan can be considered finalized.
a) While no development activities beyond those listed in SMS Framework Elements 1.1, 1.2 (partial), 1.3, and 4.1.1 (partial) are expected during Level One, the aviation organization organizes resources, assigns responsibilities, sets schedules, and defines the objectives necessary to address all gaps identified.
b) It should be noted that at each level of implementation, top management’s approval of the implementation plan must include allocation of necessary resources IAW Element 1.2.
3) Level 1 – Exit Expectations. The following items are required prior to Level 1 exit:
• Objective evidence of top management’s commitment to implement SMS, define safety policy, and convey safety expectations and objectives to all employees
• Objective evidence of top management’s commitment to ensure adequate resources are available to implement SMS
• Designation of an accountable executive who will be responsible for SMS development
• Definition of safety-related positions for those who will participate in SMS development and implementation
• Completed gap analyses on the entire organization for all elements of the SMS Framework
• Completed comprehensive SMS implementation plan for all elements to take the organization through Level 4. This SMS implementation plan must be accepted by GACA.
Full details on how the SMS implementation plan is accepted are contained in Chapter 5 of this Volume.
• Identification of the safety competencies required, completed training appropriate to Level 1, identification of the implementation phase for the competencies required, and a training plan for all employees
C. Implementation Level Two: Reactive Process, Basic Safety Risk Management. At Level Two, the aviation organization develops and implements a basic SRM process and plan, and organizes and prepares the organization for further SMS development. Information acquisition, processing, and analysis functions are implemented, and a tracking system for risk controls and corrective actions is established. At this level, the aviation organization corrects known deficiencies in safety management practices and operational processes, develops an awareness of hazards, and responds with appropriate systematic application of preventive or corrective actions. This allows the aviation organization to react to unwanted events and problems as they occur and develop appropriate remedial action. For this reason, this level is termed “reactive.”
1) Level 2 – Exit Expectations. The following items are required prior to Level 2 exit:
• Processes and procedures documented for operating the SMS to the level of reactive analysis, assessment, and mitigating actions
• Documentation developed relevant to the SMS implementation plan and SRM components (reactive processes)
• Voluntary, non-punitive employee reporting and feedback program documented and initiated
• Completed SMS training for the staff directly involved in the SMS process and initiated training for all employees to at least the level necessary for the SMS reactive processes
• Applied Safety Risk Management (SRM) processes and procedures to at least one known (existing) hazard and initiated the mitigation process to control/mitigate the risk associated with the hazard
• Updated the detailed gap analysis on the entire organization for all elements of the SMS Framework
• Updated the comprehensive SMS implementation plan for all elements to take the organization through Level 4
D. Implementation Level Three: Proactive/Predictive Processes, Looking Ahead (Fully Functioning SMS). The activities involved in the SRM processes include careful analysis of the systems and tasks involved, identification of potential hazards in these functions, and development of risk controls. The risk management process developed at Level Two is used to analyze, document, and track these activities. Because the aviation organization is now using these processes to look ahead, this level is termed “proactive/predictive.” At this level, however, these proactive/predictive processes have been implemented but their performance has not yet been proven.
Component 2.0 of the SMS Framework expects SRM to be applied to:
• Initial design of systems, processes, organizations, and products
• Development of operational procedures
• Planned changes to operational processes
1) Level 3 – Exit Expectations. The following items are required prior to Level 3 exit:
• Demonstrated performance of Level 2 requirements
• Objective evidence that all SMS processes are being updated, maintained, and practiced
• Objective evidence that the Safety Risk Management process has been conducted on all Component 2.0 operating processes
• Objective evidence of compliance with Process 2.1.1
• Objective evidence of compliance with Element 3.2
• Objective evidence of compliance with Element 4.1
• Objective evidence of compliance with Process 4.1.1
• All applicable SMS processes and procedures applied to at least one existing hazard, with the mitigation process initiated
• Completed SMS training for the staff directly involved in the SMS process to the level of accomplishing all SMS processes
• Completed employee training commensurate with the requirements of Level 3
E. Implementation Level Four: Continuous Improvement, Continued Assurance. The final level of SMS maturity is the continuous improvement level. Processes have been put in place, and their performance and effectiveness have been verified. The complete Safety Assurance (SA) process, including continuous monitoring and the remaining features of the other SRM and SA processes, is functioning. A major objective of a successful SMS is to attain and maintain this continuous improvement status for the life of the organization.
Section 4. Analysis and Implementation
2.3.4.1. ANALYSIS PROCESSES. Guidance and tools have been developed for use in directing and evaluating progress through the SMS phased implementation process. These tools are based on performance objectives and design expectations developed for each Component, Element, and Process of the SMS Framework.
• The SMS Framework is based on ICAO and GACA requirements/guidance
• The SMS Assessment Guide and SMS Acceptance Guide are based upon the SMS Framework, in question form
• The Gap Analysis Tools are based upon the ICAO SMS Gap Analysis tools in a user-friendly format
A. System Description and Analysis. Prior to performing the gap analysis process, the aviation organization should conduct an analysis of all of the organization’s operational functions, programs, processes, and documentation in order to fully understand how its existing operations compare to the SMS Framework.
B. Gap Analysis. The phased implementation of an SMS requires an aviation organization to conduct an analysis of its system to determine which components and elements of an SMS are currently in place and which components and elements must be added or modified to meet the implementation requirements. This analysis is known as gap analysis, and it involves comparing the SMS requirements against the existing resources of the aviation organization.
1) A gap analysis tool based on the ICAO SMS Gap Analysis tool is available for the use of aviation organizations. The gap analysis tool provides, in checklist format, information to assist in the evaluation of the components and elements that comprise the SMS framework and to identify the components and elements that will need to be developed. Each question in the checklist is designed for a “Yes” or “No” response. A “Yes” answer indicates that the aviation organization already has the component or element of the SMS framework in question incorporated into its system and that it either matches or exceeds the requirement. A “No” answer indicates that a gap exists between the component/element of the SMS framework and the aviation organization’s system. Once the gap analysis is complete and documented, it will form one basis of the SMS implementation plan.
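The “Yes”/“No” logic of the gap analysis checklist described above can be sketched in a few lines of code. This is an illustrative sketch only, not part of the GACA tool; the element references, questions, and answers below are hypothetical examples.

```python
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    reference: str  # SMS Framework component/element reference (hypothetical)
    question: str   # checklist question, answered Yes or No
    answer: bool    # True = "Yes": the requirement is already met or exceeded

def find_gaps(items):
    """Return the framework references where a 'No' answer indicates a gap."""
    return [item.reference for item in items if not item.answer]

# Hypothetical assessor answers for three framework references
items = [
    ChecklistItem("1.1", "Is a Safety Policy defined and documented?", True),
    ChecklistItem("1.3", "Has a safety manager been appointed?", False),
    ChecklistItem("2.0", "Is an SRM process in place?", False),
]

gaps = find_gaps(items)  # the "No" items form one basis of the implementation plan
```

Each “No” entry in `gaps` would then be carried forward into the SMS implementation plan for corrective action.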
2.3.4.3. IMPLEMENTATION PLAN. Based on the results of the gap analysis process, an implementation plan is prepared to “fill the gaps”, the “gaps” being those elements in the SMS Framework that have not completely met expectations (e.g., are not already being performed) by the aviation organization. The SMS implementation plan is a realistic strategy for the implementation of an SMS that will meet the aviation organization’s safety objectives while supporting effective and efficient delivery of services. It describes how the aviation organization will achieve its corporate safety objectives and how it will meet any new or revised safety requirements, regulatory or otherwise. Further guidance on how to develop an SMS implementation plan is contained in the ICAO SMM.
A. Scope and Objective of the Plan. The implementation plan need not be complex or excessively detailed, but should provide a basic roadmap to meet the overall objective stated in the SMS Framework to “…develop and implement an integrated, comprehensive SMS for [the] entire organization.”
1) The SMS Implementation Plan. The SMS implementation plan, which may consist of more than one document, details the actions to be taken, by whom, and within what time frame. The implementation plan can be created in any format that is useful to the company but should provide at least the following:
• Component/element/process reference from the SMS Assurance Guide or SMS Framework,
• Brief description of the actions to be taken and manual(s) affected,
• Responsible organization and/or individual(s), and
• Expected completion date.
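The minimum fields listed above can be captured as a simple record. This is an illustrative sketch under assumed field names, not a GACA-mandated format; the example values are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PlanEntry:
    reference: str         # component/element/process reference
    action: str            # brief description of actions and manual(s) affected
    responsible: str       # responsible organization and/or individual(s)
    expected_completion: date

# Hypothetical implementation plan entry
entry = PlanEntry(
    reference="1.3",
    action="Appoint safety manager; revise safety manual accordingly",
    responsible="Director of Safety",
    expected_completion=date(2025, 6, 30),
)
```

A collection of such entries, one per identified gap, provides the “road map” the plan is meant to be, and can be sorted by expected completion date to track progress.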
2) The Implementation Plan. The Implementation Plan should span the entire SMS development process. It should start at preparation and organization, and continue through all levels of maturity. It should be updated as necessary (along with the detailed gap analysis) as the projects progress. At each level, top management’s approval of the implementation plan must include allocation of necessary resources IAW element 1.2.
Section 1. SMS Design and Performance Assessments
2.4.1.1. INTRODUCTION. This Safety Management System (SMS) assessment guidance has been developed to aid in the assessment of the design and performance of aviation organizations’ SMS programs in order to ensure that they comply with the SMS requirements specified in General Authority of Civil Aviation Regulation (GACAR) Part 5.
A. SMS assessments are important and necessary activities to ensure that:
• The SMS has been designed and implemented to meet the design expectations (Design Assessments)
• The SMS is meeting the performance objectives (Performance Assessments)
B. The SMS assessment guidance contained in this chapter is intended to be used by anyone assessing an SMS against GACAR Part 5 requirements, including:
• General Authority of Civil Aviation (GACA) aviation safety inspectors (Inspectors) assessing the SMS of an aviation organization regulated under GACAR Part 5
• Aviation organizations conducting their own assessments (i.e., internal audits and evaluations)
• Third-party assessors such as industry associations or consultant auditors working on behalf of the aviation organizations
C. The SMS assessment guidance in this chapter can also be used by aviation organizations to compare their current processes and procedures with potential or needed processes and procedures.
This activity is called a “Gap Analysis” and is a requirement for existing certificate holders who are implementing SMS requirements under the SMS Phased Implementation program described in Chapter 3 of this Volume.
2.4.1.3. THE SMS ASSESSMENT GUIDE. An SMS Assessment Guide is included as Appendix A to this chapter.
A. Organization. The SMS Assessment Guide is organized according to the SMS Framework, which is presented in Chapter 2 of this Volume. Each component, element, and process of an SMS is included in the SMS Assessment Guide. The SMS Assessment Guide has been structured so that assessors can either select specific components, elements, or processes for assessment or use the entire Guide to assess the entire SMS.
B. How to Use the SMS Assessment Guide. For each required component, element, and process, the SMS Assessment Guide includes:
1) A brief statement of the performance objective.
2) A series of questions that are used to assess (i.e., evaluate) whether the design expectations have been met.
NOTE: This evaluation of design expectations is called the Design Assessment.
3) A “bottom line assessment” question to address whether the performance objective has been achieved.
NOTE: This evaluation of achieved performance is called the Performance Assessment.
Recall from the SMS Framework described in detail in Chapter 2 that:
• Performance Objectives represent the objective outcomes needed for the particular SMS Framework element or process under evaluation. In other words, at a minimum, what should the aviation organization expect this element or process to do?
• Design Expectations represent organizational structure and characteristics that, if properly implemented, should provide the system outcomes identified in the performance objectives. In other words, what might the organization do to get this element or process to perform the way it should?
Assessors should ask each question that pertains to the component, element, or process under review and document their observations. From these assessments, it is possible to determine whether the SMS is meeting the minimum standards specified in GACAR Part 5. Chapter 5 of this Volume has further details on the process for making final determinations of whether the SMS is acceptable.
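The way a Design Assessment (the series of design expectation questions) combines with a Performance Assessment (the bottom line assessment question) can be sketched as follows. This is an illustrative sketch only; the pass criterion shown (every design question answered “Yes”) is an assumption for illustration, not the GACA determination procedure described in Chapter 5.

```python
def assess(design_answers, bottom_line_met):
    """Combine a Design Assessment (all design-expectation questions,
    True = 'Yes') with a Performance Assessment (the bottom line question)."""
    design_ok = all(design_answers)
    return {
        "design": design_ok,            # design expectations all met?
        "performance": bottom_line_met, # performance objective achieved?
        "acceptable": design_ok and bottom_line_met,
    }

# Hypothetical element: one design-expectation question answered "No",
# but the bottom line performance objective was achieved
result = assess([True, True, False], bottom_line_met=True)
```

Under this assumed criterion, the element above would not yet be acceptable, because one design expectation remains unmet even though performance was demonstrated.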
C. Key Attributes. The questions associated with the assessment of design expectations (i.e. Design Assessments) are organized according to several key attributes associated with well-designed systems and processes. The key attributes are described as:
1) (R)—Responsibility. Who is accountable for management and overall quality of the process (planning, organizing, directing, controlling) and its ultimate accomplishment. 2) (A)—Authority. Who can direct, control, or change the process, as well as who can make key decisions such as risk acceptance. This attribute also includes the concept of empowerment.
3) (P)—Procedures. ISO 9000:2000 defines a “procedure” as “a specified way to carry out an activity or a process.” Procedures translate the “what” of goals and objectives into the “how” of practical activities (things people do).
4) (C)—Controls. In this context, controls are elements of the system, including hardware, software, special procedures or procedural steps, and supervisory practices designed to keep processes on track to achieve their intended results.
5) (I)—Interfaces. This aspect includes examining such things as lines of authority between departments, lines of communication between employees, consistency of procedures, and clear delineation of lines of responsibility between organizations, work units, and employees. Interfaces are the “inputs” and “outputs” of a process.
6) (PM)—Process Measures. Ways to provide feedback to responsible parties that required actions are taking place, required outputs are being produced, and expected outcomes are being achieved.
D. Usage. The SMS Assessment Guide can be used for both:
1) Initial Design Assessments (DA) during SMS implementation.
2) Performance Assessments (PA) to measure the ongoing performance and effectiveness of an operating SMS.
E. The GACA views the objectives and expectations in the SMS Framework as the minimum standards for an SMS. Aviation organizations regulated under GACAR Part 5 must develop and implement an SMS that meets these minimum standards if they wish to have their certificate/authorization remain valid. The SMS Assessment Guide is the tool to be used to assess the degree of implementation of these minimum requirements.
NOTE: Aviation organizations may establish additional or stricter requirements.
2.4.1.5. DESIGNATION OF THE ACCOUNTABLE EXECUTIVE.
A. GACAR §5.25 stipulates the regulations for the designation of the Accountable Executive as well as his responsibilities.
B. To ensure the appropriate implementation of §5.25, GACA inspectors will follow the procedure stipulated in Appendix B to this chapter.
COMPONENT 1.0 - SAFETY POLICY AND OBJECTIVES
Component Performance Objective
The organization will develop and implement an integrated, comprehensive SMS for its organization and will incorporate a procedure to identify and maintain compliance with all applicable regulatory and statutory requirements.
Design Expectations
Management Accountability
Does the organization clearly identify who is responsible for the quality of the organizational management processes (name, position, organization)? Do procedures also define who is responsible for accomplishing the process?
Procedure: Scope
Does the organization’s SMS include the complete scope and life cycle of the organization’s systems?
Procedure: Management
Does the organization require the SMS processes to be -
• Documented?
• Monitored?
• Measured?
• Analyzed?
Procedure: Promotion of Positive Safety Culture
Does the organization promote a positive safety culture as in Safety Promotion Component 4.0?
Procedure: Quality Policy
Does top management ensure that the organization’s quality policy, if present, is consistent with (or not in conflict with) its SMS?
Procedure: Safety Management Planning
Does the organization establish and maintain measurable criteria that accomplish the objectives of its Safety Policy? Does the organization establish and maintain a safety management plan to describe methods for achieving the safety objectives set forth in its Safety Policy?
Procedure: Regulatory Compliance
Does the organization identify all current and forthcoming GACA regulatory requirements? Does the organization ensure the SMS complies with all applicable regulatory requirements?
Outputs and Measures
Does the organization ensure all SMS outputs are -
• Recorded?
• Monitored?
• Measured?
• Analyzed?
Does the organization periodically measure performance objectives and design expectations of the general Safety Policy Component?
Controls
Does the organization establish and maintain supervisory and operational controls to ensure procedures are followed for safety-related operations and activities?
Bottom Line Assessment
Has the organization developed and implemented an integrated, comprehensive SMS for its entire organization and incorporated a procedure to identify and maintain compliance with current safety-related, regulatory, and other requirements?
ELEMENT 1.1 - SAFETY POLICY
Performance Objective
Top management will define the organization’s Safety Policy and convey its expectations and objectives to its employees.
Design Expectations
Management Accountability
Does top management define the organization’s Safety Policy?
Procedure
Does the organization’s Safety Policy include the following -
• A commitment to implement and maintain the SMS?
• A commitment to continuously improve the level of safety?
• A commitment to managing safety risk?
• A commitment to comply with all applicable regulatory requirements?
• A commitment to encourage employees to report safety issues without reprisal, as per SMS Framework Employee Reporting and Feedback System Process 3.1.6?
• Clear standards for acceptable behavior for all employees?
Is the Safety Policy documented?
Outputs and Measures
Does the Safety Policy provide guidance to management on setting safety objectives? Does the Safety Policy provide guidance to management on reviewing safety objectives? Does the organization ensure the Safety Policy is communicated, with visible management endorsement, to all employees and responsible parties? Does the organization ensure the Safety Policy is reviewed periodically to verify it remains relevant and appropriate to the organization? Does the organization ensure the Safety Policy reflects the organization’s commitment regarding safety, including the promotion of a positive safety culture? Does the organization identify and communicate management and individuals’ safety performance responsibilities? The organization will periodically measure performance objectives and design expectations of the Safety Policy Element.
Bottom Line Assessment Has top management defined the organization’s Safety Policy and conveyed the expectations and objectives of that policy to its employees?
ELEMENT 1.2 - MANAGEMENT COMMITMENT AND SAFETY ACCOUNTABILITIES
Performance Objective
The organization will define, document, and communicate the safety roles, responsibilities, and authorities throughout its organization.
Design Expectations
Management Accountability
Does the organization ensure top management has the ultimate responsibility for the SMS? Does the organization’s top management provide the resources needed to implement and maintain the SMS? Does the organization define levels of management that can make safety risk acceptance decisions as described in Component 2.0, D)?
Procedure/Output/Measure
Does the organization ensure that aviation safety-related positions, responsibilities, and authorities are -
• Defined?
• Documented?
• Communicated throughout the organization?
Does the organization periodically measure performance objectives and design expectations of the Management Commitment and Safety Accountabilities Element?
Bottom Line Assessment
Has the organization defined, documented, and communicated the safety roles, responsibilities, and authorities throughout the organization?
ELEMENT 1.3 - KEY SAFETY PERSONNEL
Performance Objective
The organization will appoint a safety manager to manage, monitor, and coordinate the SMS processes throughout its organization.
Design Expectations
Management Responsibility/Procedure
Did top management appoint a member of management who, irrespective of other responsibilities, will be responsible for and authorized to -
• Ensure that SMS processes are established, implemented, and maintained?
• Report to top management on the performance of the SMS and what needs to be improved?
• Ensure the organization communicates its safety requirements throughout the organization?
Is the appointed safety manager independent of any operational responsibilities to avoid any potential conflict of interest?
Outputs and Measures
Does the organization ensure that Key Safety Personnel positions, responsibilities, and authorities are communicated throughout the organization? The organization will periodically measure performance objectives and design expectations of the Key Safety Personnel Element 1.3.
Bottom Line Assessment Has the organization appointed a safety manager to manage, monitor and coordinate the SMS processes throughout its organization?
ELEMENT 1.4 - EMERGENCY PREPAREDNESS AND RESPONSE
Performance Objective
The organization will develop and implement procedures that it will follow in the event of an accident, incident, or operational emergency to mitigate the effects of these events.
Design Expectations
Management Responsibility
Does the organization clearly identify who is responsible for the quality of the Emergency Preparedness and Response Process and associated documentation? Do procedures define who is responsible for accomplishing the process?
Procedure
Does the organization establish procedures across all operational departments as expected in Safety Policy and Objectives Component 1.0 to -
• Identify hazards which have potential for accidents, incidents, or operational emergencies?
• Coordinate and plan the organization’s response to accidents, incidents, or operational emergencies?
• Execute periodic exercises of the organization’s emergency response procedures?
Outputs and Measures
Does the organization:
• Identify interfaces between the emergency response functions of different operational elements of the organization?; and
• Periodically measure performance objectives and design expectations of the Emergency Preparedness and Response Element?
Bottom Line Assessment
Has the organization developed and implemented procedures that it will follow in the event of an accident, incident, or operational emergency to mitigate the effects of these events?
ELEMENT 1.5 - SMS DOCUMENTATION AND RECORDS
Performance Objective The organization will have documented safety policies, objectives, procedures, a document/record management process, and a management plan that meet organizational safety expectations and objectives.
Design Expectations
Management Responsibility
Does the organization clearly identify who is responsible for the quality of the Documentation and Records Process? Do procedures define who is responsible for accomplishing the process?
Procedure: Document Contents
Does the organization establish and maintain, in paper or electronic format, information to describe the following -
• Safety policies?
• Safety objectives?
• SMS expectations?
• Safety procedures and processes?
• The scope of SMS implementation?
• Accountabilities, responsibilities, and authorities for safety-related procedures and processes?
• Interactions and interfaces between safety-related procedures and policies?
• SMS outputs?
Procedure: Document Quality
Does the organization require all documentation be -
• Legible?
• Dated (with the dates of revisions)?
• Readily identifiable?
• Maintained in an orderly manner?
• Retained for a specified period as determined by the organization?
Procedure: Document Management
Does the organization control all documents to ensure -
• They are easily located?
• They are periodically reviewed?
• They are revised as needed?
• Authorized personnel approve them for adequacy?
Does the organization ensure that all current document versions are available at all locations where essential SMS operations are performed? Does the organization ensure that obsolete documents are either removed as soon as possible, or that they are not used accidentally?
Outputs and Measures
Has the organization maintained its safety management plan in accordance with the objectives and expectations contained within this Element? Does the organization ensure SMS records are -
• Identified?
• Maintained?
• Disposed of?
• Legible?
• Easy to identify?
• Traceable to the activity involved?
• Easy to find?
• Protected against damage?
• Protected against deterioration?
• Protected against loss?
• Annotated with record retention times?
Does the organization periodically measure performance objectives and design expectations of the Documentation and Records Element?
Bottom Line Assessment
Has the organization clearly defined and documented (in paper or electronic format) safety policies, objectives, the scope of SMS implementation, accountabilities, responsibilities, and authorities for SMS processes, procedures, and document/record maintenance processes, and established, implemented, and maintained a safety management plan that meets the safety expectations and objectives?
Component 2.0 - Safety Risk Management Flow Diagram
The Safety Risk Management flow diagram (below) is annotated with the Framework element/process numbers and other notes. These annotations will help the user visualize the Framework in terms of a process flow with attendant interfaces and perhaps more clearly understand the component/element/process expectations.
COMPONENT 2.0 - SAFETY RISK MANAGEMENT
Component Performance Objective
The organization will develop processes to understand the critical characteristics of its systems and operational environment and apply this knowledge to identify hazards, analyze and assess risk, and design risk controls.

Design Expectations

Input
Does the organization identify inputs (interfaces) for this Component obtained from the critical characteristics of its systems and operational environment?

Management Responsibility
Does the organization clearly identify who is responsible for the quality of the Safety Risk Management Process? Do procedures define who is responsible for accomplishing the process?

Procedure
Does the organization's SMS, at a minimum, include the following processes -
• System description and task analysis?
• Hazard identification?
• Safety risk analysis?
• Safety risk assessment?
• Safety risk control and mitigation?
Do the organization's SMS processes apply to -
• Initial designs of systems, organizations, and/or products?
• Hazards that are identified in the safety assessment functions (described in Safety Assurance Component 3.0)?
• Planned changes to operational processes?
Does the organization establish feedback loops between the assessment functions described in the Continuous Monitoring Process 3.1.1 to evaluate the effectiveness of safety risk controls? Does the organization define acceptable and unacceptable levels of safety risk (for example, does the organization have a safety risk matrix)? Does the organization's safety risk acceptance process include descriptions of the following -
• Severity levels?
• Likelihood levels?
• The level of management that can make safety risk acceptance decisions in accordance with Element 1.2?
Does the organization define acceptable risk for hazards that will exist in the short term while safety risk control/mitigation plans are developed and implemented?

Outputs and Measures
Does the organization:
• Identify interfaces between the Safety Risk Management Component (this Component) and the Safety Assurance Component (3.0)? and
• Periodically measure performance objectives and design expectations of the Safety Risk Management Component?

Controls
Does the organization ensure that:
• Procedures are followed for safety-related operations and activities? and
• Supervisory and operational controls are periodically reviewed to ensure the effectiveness of the Safety Risk Management Component (2.0)?

Bottom Line Assessment
Has the organization developed processes to understand the critical characteristics of its systems and operational environment and applied this knowledge to the identification of hazards, risk analysis and risk assessment, and the design of risk controls?
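The safety risk matrix the Component asks about (severity and likelihood levels mapped to acceptable and unacceptable risk, with acceptance decisions reserved to defined management levels) can be sketched as follows. The five-level scales, the multiplicative index, and the acceptability thresholds are illustrative assumptions; each organization defines its own levels and boundaries.

```python
# Illustrative safety risk matrix sketch. Level names, the index formula,
# and the band thresholds are assumptions, not Framework-prescribed values.
SEVERITY = ["negligible", "minor", "major", "hazardous", "catastrophic"]
LIKELIHOOD = ["extremely improbable", "improbable", "remote", "occasional", "frequent"]

def risk_index(severity: str, likelihood: str) -> int:
    """Combine severity and likelihood into a single ordinal risk index."""
    return (SEVERITY.index(severity) + 1) * (LIKELIHOOD.index(likelihood) + 1)

def acceptability(severity: str, likelihood: str) -> str:
    """Map the index onto assumed acceptable / tolerable / unacceptable bands."""
    idx = risk_index(severity, likelihood)
    if idx <= 4:
        return "acceptable"
    if idx <= 12:
        return "tolerable (management decision per Element 1.2)"
    return "unacceptable"

print(acceptability("major", "remote"))           # index 3 * 3 = 9
print(acceptability("catastrophic", "frequent"))  # index 5 * 5 = 25
```

Encoding the matrix this way makes the acceptance boundaries explicit and auditable, which is the point of the "defined acceptable and unacceptable levels of safety risk" expectation.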
ELEMENT 2.1 - HAZARD IDENTIFICATION AND ANALYSIS
Process 2.1.1 - System Description and Task Analysis

Performance Objective
The organization will describe and analyze its systems, operations, and operational environment to gain an understanding of critical design and performance factors, processes, and activities to identify hazards.

Design Expectations

Input
Are inputs (interfaces) for the System Description and Task Analysis Process obtained from the Safety Risk Management Component 2.0?

Management Responsibility
Does the organization clearly identify who is responsible for the quality of the System Description and Task Analysis Process? Do procedures also define who is responsible for accomplishing the process?

Procedure
Does the organization develop system descriptions and task analyses to the level of detail necessary to -
• Identify hazards?
• Develop operational procedures?
• Develop and implement risk controls?

Outputs and Measures
Does the organization:
• Identify interfaces between the system description and task analysis function (this process) and the Hazard Identification Process 2.1.2? and
• Periodically measure performance objectives and design expectations of the System Description and Task Analysis Process (2.1.1)?

Controls
Does the organization ensure that:
• Procedures are followed for safety-related operations and activities? and
• Supervisory and operational controls are periodically reviewed to ensure the effectiveness of the System Description and Task Analysis Process (2.1.1)?

Bottom Line Assessment
Has the organization analyzed its systems, operations, and operational environment to gain an understanding of critical design and performance factors, processes, and activities to identify hazards?
ELEMENT 2.1 - HAZARD IDENTIFICATION AND ANALYSIS
Process 2.1.2 - Identify Hazards

Performance Objective
The organization will identify and document the hazards in its operations that are likely to cause death, serious physical harm, or damage to equipment or property, in sufficient detail to determine the associated level of risk and risk acceptability.

Design Expectations

Input
Are inputs (interfaces) for the Hazard Identification Process obtained from the System Description and Task Analysis Process 2.1.1, including new hazards identified from the Safety Assurance Component 3.0, failures of risk controls due to design deficiencies found in the System Assessment Process 3.1.8, and/or any other source?

Management Responsibility
Does the organization clearly identify who is responsible for the quality of the Hazard Identification Process? Do procedures also define who is responsible for accomplishing the process?

Procedure
Does the organization identify hazards for the entire scope of each system, as defined in the system description?

Note: While it is recognized that identification of every conceivable hazard is impractical, aviation organizations are expected to exercise due diligence in identifying and controlling significant and reasonably foreseeable hazards related to their operations.

Does the organization document the identified hazards? Does the organization have a means of tracking hazard information? Does the organization manage hazard information through the entire Safety Risk Management Process?

Outputs and Measures
Does the organization:
• Identify interfaces between this process and the Analysis of Safety Risk Process (2.2.1)?
• Periodically measure performance objectives and design expectations of the Hazard Identification Process?

Controls
Does the organization ensure that:
• Procedures are followed for safety-related operations and activities? and
• Supervisory and operational controls are periodically reviewed to ensure the effectiveness of the Hazard Identification Process?

Bottom Line Assessment
Has the organization identified and documented the hazards in its operations that are likely to cause death, serious physical harm, or damage to equipment or property, in sufficient detail to determine the associated level of risk and risk acceptability?
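The expectation that hazards be documented, tracked, and managed through the entire Safety Risk Management process can be illustrated with a minimal hazard log. The field names, status values, and methods below are hypothetical; an organization's actual hazard register would reflect its own procedures.

```python
# Minimal hazard-log sketch: each hazard stays identified, documented,
# and traceable through the SRM process. Fields and statuses are assumed.
from dataclasses import dataclass

@dataclass
class Hazard:
    hazard_id: str
    description: str
    source: str                  # e.g. task analysis, audit, employee report
    status: str = "identified"   # identified -> analyzed -> assessed -> controlled

class HazardLog:
    def __init__(self):
        self._hazards: dict[str, Hazard] = {}

    def record(self, hazard: Hazard) -> None:
        """Document an identified hazard so it can be tracked."""
        self._hazards[hazard.hazard_id] = hazard

    def advance(self, hazard_id: str, new_status: str) -> None:
        """Track the hazard through the SRM process stages."""
        self._hazards[hazard_id].status = new_status

    def open_hazards(self) -> list[str]:
        """Hazards not yet closed out with an accepted risk control."""
        return [h.hazard_id for h in self._hazards.values() if h.status != "controlled"]

log = HazardLog()
log.record(Hazard("HZ-001", "FOD on apron", "employee report"))
log.advance("HZ-001", "assessed")
print(log.open_hazards())  # ['HZ-001']
```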
ELEMENT 2.2 - RISK ASSESSMENT AND CONTROL
Process 2.2.1 - Analyze Safety Risk

Performance Objective
The organization will determine and analyze the severity and likelihood of potential events associated with identified hazards and will identify risk factors associated with unacceptable levels of severity or likelihood.

Design Expectations

Input
Are inputs (interfaces) for this process obtained from the Hazard Identification Process (2.1.2)?

Management Responsibility
Does the organization clearly identify who is responsible for the quality of the Safety Risk Analysis Process? Do procedures also define who is responsible for accomplishing the process?

Procedure
Do the organization's safety risk analysis functions include -
• Analysis of existing safety risk controls?
• Triggering mechanisms?
• The safety risk of a reasonably likely outcome from the existence of a hazard?
Do the organization's reasonably likely outcomes from the existence of a hazard include estimations of -
• Likelihood?
• Severity?

Outputs and Measures
Does the organization:
• Identify interfaces between the risk analysis functions (this process) and the Risk Assessment Process (2.2.2)?
• Periodically measure performance objectives and design expectations of the Risk Analysis Process?

Controls
Does the organization ensure that:
• Procedures are followed for safety-related operations and activities?
• Supervisory and operational controls are periodically reviewed to ensure the effectiveness of the Analysis of Safety Risk Process?

Bottom Line Assessment
Has the organization determined and analyzed the factors related to the severity and likelihood of potential events associated with identified hazards and identified factors associated with unacceptable levels of severity or likelihood?
ELEMENT 2.2 - RISK ASSESSMENT AND CONTROL
Process 2.2.2 - Assess Safety Risk

Performance Objective
The organization will assess the risk associated with each identified hazard and define risk acceptance procedures and the levels of management that can make safety risk acceptance decisions.

Design Expectations

Input
Are inputs (interfaces) for this process obtained from the Safety Risk Analysis Process 2.2.1 in terms of estimated severity and likelihood?

Management Responsibility
Does the organization clearly identify who is responsible for the quality of the Safety Risk Assessment Process? Do procedures define who is responsible for accomplishing the process?

Procedure
Does the organization analyze each hazard for its safety risk acceptability using its safety risk acceptance process as described in the SMS Framework Safety Risk Management Component 2.0?

Outputs and Measures
Does the organization:
• Identify interfaces between the risk assessment functions (this process) and the Control/Mitigate Safety Risk Process 2.2.3?
• Periodically measure performance objectives and design expectations of the Safety Risk Assessment Process?

Controls
Does the organization ensure that:
• Procedures are followed for safety-related operations and activities?
• Supervisory and operational controls are periodically reviewed to ensure the effectiveness of the Safety Risk Assessment Process?

Bottom Line Assessment
Has the organization assessed the risk associated with each identified hazard and defined risk acceptance procedures and the levels of management that can make safety risk acceptance decisions?
ELEMENT 2.2 - RISK ASSESSMENT AND CONTROL
Process 2.2.3 - Control/Mitigate Safety Risk

Performance Objective
The organization will design and implement a risk control for each identified hazard for which there is an unacceptable risk, to reduce risk to acceptable levels. The potential for residual risk and substitute risk will be analyzed before implementing risk controls.

NOTE: Although Process 2.2.3 is very similar to the Preventive/Corrective Action Process 3.3.1, the primary differences are:
• Process 2.2.3 is used during the design of a system (often looking to the future) or in the redesign of a non-performing system where system requirements are being met but the system is not producing the desired results.
• Process 2.2.3 is also used when new hazards discovered during the safety assessment process were not taken into account during initial design.
• Process 3.3.1 is used to develop actions to bring a non-performing system back into conformance with its design requirements.

Design Expectations

Input
Are inputs (interfaces) for the Control/Mitigate Safety Risk Process obtained from the Safety Risk Assessment Process 2.2.2?

Management Responsibility
Does the organization clearly identify who is responsible for the quality of the Control/Mitigate Safety Risk Process? Do procedures define who is responsible for accomplishing the process?

Procedure
Does the organization have a safety risk control/mitigation plan for each hazard with unacceptable risk? Are the organization's safety risk controls -
• Clearly described?
• Evaluated to ensure that the expectations have been met?
• Ready to be used in their intended operational environment?
• Documented?
Does the organization ensure that substitute risk is evaluated when creating safety risk controls and mitigations?

Outputs and Measures
Does the organization:
• Identify interfaces between the risk control/mitigation functions (this process) and the Safety Assurance Component 3.0, specifically Processes 3.1.1 through 3.1.6?
• Periodically measure performance objectives and design expectations of the Control/Mitigate Safety Risk Process?

Controls
Does the organization ensure that:
• Procedures are followed for safety-related operations and activities?
• Supervisory and operational controls are periodically reviewed to ensure the effectiveness of the safety risk control process?

Bottom Line Assessment
Has the organization designed and implemented a risk control for each identified hazard for which there is unacceptable risk, to reduce to acceptable levels the potential for death, serious physical harm, or damage to equipment or property? Has the residual or substitute risk been analyzed before implementing any risk control?

Component 3.0 - Safety Assurance Flow Diagram
COMPONENT 3.0 - SAFETY ASSURANCE
Component Performance Objective
The organization will monitor, measure, and evaluate the performance of its systems to identify new hazards, measure the effectiveness of risk controls (to include preventive and corrective actions), and ensure compliance with regulatory requirements.

Design Expectations

Input
Are inputs (interfaces) for this Component obtained from the Safety Risk Management Component 2.0?

Management Responsibility
Does the organization clearly identify who is responsible for the quality of the Safety Assurance Component? Do procedures define who is responsible for accomplishing the process?

Procedure
Does the organization monitor its systems and operations to -
• Identify new hazards?
• Measure the effectiveness of safety risk controls?
• Ensure compliance with regulatory requirements applicable to the SMS?
Is the organization's safety assessment function based upon a comprehensive system description and task analysis as described in Process 2.1.1, System Description and Task Analysis? Does the organization collect the data necessary to demonstrate the effectiveness of -
• Its operational processes?
• The SMS?

Outputs and Measures
Does the organization identify interfaces between the data acquisition processes (3.1.1 to 3.1.6) and -
• The System Assessment Process (3.1.8)?
• The Hazard Identification Process (2.1.2)?
Does the organization periodically measure performance objectives and design expectations of the Safety Assurance Component?

Controls
Does the organization ensure that:
• Procedures are followed for safety-related operations and activities? and
• Supervisory and operational controls are periodically reviewed to ensure the effectiveness of the Safety Assurance Component?

Bottom Line Assessment
Has the organization monitored, measured, and evaluated the performance of its systems to identify new hazards, measure the effectiveness of risk controls (to include preventive and corrective actions), and ensured compliance with regulatory requirements?
ELEMENT 3.1 - SAFETY PERFORMANCE MONITORING AND MEASUREMENT
Process 3.1.1 - Continuous Monitoring

Performance Objective
The organization will monitor operational data, including products and services received from contractors, to identify hazards, measure the effectiveness of safety risk controls, and assess system performance.

Design Expectations

Input
Are inputs (interfaces) for this process obtained from the Risk Assessment Process 2.2.2, the Risk Control/Mitigation Process 2.2.3, the System Assessment Process 3.1.8, or the Preventive/Corrective Action Process 3.3.1?

Management Responsibility
Does the organization clearly identify who is responsible for the quality of the Continuous Monitoring Process? Do procedures define who is responsible for accomplishing the process?

Procedure
Does the organization monitor operational data (e.g., duty logs, crew reports, work cards, process sheets, and reports from the employee safety feedback system specified in Process 3.1.6) to -
• Determine whether it conforms to safety risk controls (described in Process 2.2.3)?
• Measure the effectiveness of safety risk controls (described in Process 2.2.3)?
• Assess SMS system performance?
• Identify hazards?
Does the organization monitor products and services from contractors?

Outputs and Measures
Does the organization:
• Identify interfaces between these continuous monitoring functions and the Analysis of Data Process 3.1.7?
• Periodically measure performance objectives and design expectations of the Continuous Monitoring Process?

Bottom Line Assessment
Has the organization monitored operational data, including products and services received from contractors, to identify hazards, measure the effectiveness of safety risk controls, and assess system performance?
ELEMENT 3.1 - SAFETY PERFORMANCE MONITORING AND MEASUREMENT
Process 3.1.2 - Internal Audits by Operational Departments

Performance Objective
The organization will perform regularly scheduled internal audits of its operational processes, including those performed by contractors, to verify safety performance and evaluate the effectiveness of safety risk controls.

Design Expectations

Input
Are inputs (interfaces) for this process obtained from the Safety Risk Management Component 2.0?

Management Responsibility
Does the organization clearly identify who is responsible for the quality of the Internal Audit Process? Do procedures define who is responsible for accomplishing the process?

Procedure
Does the organization conduct regularly scheduled internal audits of its operational processes, including those performed by contractors, to -
• Verify safety performance?
• Evaluate the effectiveness of safety risk controls?

Outputs and Measures
Does the organization:
• Identify interfaces between this process and the Analysis of Data Process 3.1.7?
• Periodically measure performance objectives and design expectations of the Internal Audit Process?

Controls
Does the organization ensure that:
• Procedures are followed for safety-related operations and activities? and
• Supervisory and operational controls are periodically reviewed to ensure the effectiveness of the Internal Audit Process?

Bottom Line Assessment
Has the organization performed regularly scheduled internal audits of its operational processes, including those performed by contractors, to verify safety performance and evaluate the effectiveness of safety risk controls?
ELEMENT 3.1 - SAFETY PERFORMANCE MONITORING AND MEASUREMENT
Process 3.1.3 - Internal Evaluation

Performance Objective
The organization will conduct internal evaluations of the SMS and operational processes at planned intervals to determine that the SMS conforms to its objectives and expectations.

Design Expectations

Input
Are inputs (interfaces) for this process obtained from the Risk Assessment Process 2.2.2 or the Control/Mitigate Safety Risk Process 2.2.3?

Management Responsibility
Does the organization clearly identify who is responsible for the quality of the Internal Evaluation Process? Do procedures define who is responsible for accomplishing the process?

Procedure
Does the organization ensure internal evaluations of operational processes and the SMS are conducted at planned intervals to determine that the SMS conforms to objectives and expectations?

Note: Sampling of SMS output measurement is a primary control under Component 1.0.

Does the organization's planning of the internal evaluation program take into account -
• The safety criticality of the processes being evaluated?
• The results of previous evaluations?

Procedure: Program Contents
Does the organization define what an evaluation is? Does the definition of evaluations include information about evaluation -
• Criteria?
• Scope?
• Frequency?
• Methods?
• Processes used to select the evaluators?

Procedure: Documentation
Do the organization's documented procedures include -
• Evaluation responsibilities?
• Requirements for -
• Planning evaluations?
• Conducting evaluations?
• Reporting results?
• Maintaining records?
• Evaluating contractors and vendors?

Procedure: Scope
Does the organization's evaluation program include an evaluation of the operational departments described in SMS Framework Safety Policy Component 1.0?

Procedure: Independence of Evaluators
Does the organization ensure the person or organization performing evaluations of operational processes is independent of the process being evaluated?

Outputs and Measures
Does the organization:
• Identify interfaces between this process and the Analysis of Data Process 3.1.7?
• Periodically measure performance objectives and design expectations of the Internal Evaluation Process?

Controls
Does the organization ensure that:
• Procedures are followed for safety-related operations and activities?
• Supervisory and operational controls are periodically reviewed to ensure the effectiveness of the Internal Evaluation Process?

Bottom Line Assessment
Has the organization conducted internal evaluations of the SMS and operational processes at planned intervals to determine that the SMS conforms to its objectives and expectations?
ELEMENT 3.1 - SAFETY PERFORMANCE MONITORING AND MEASUREMENT
Process 3.1.4 - External Auditing of the SMS

Performance Objective
The organization will include the results of assessments performed by oversight organizations, and other external audit results, in its data analysis.

Design Expectations

Input
Are inputs (interfaces) for this process obtained from the Control/Mitigate Safety Risk Process 2.2.3 and from the GACA and/or other external agencies?

Management Responsibility
Does the organization clearly identify who is responsible for the quality of the External Auditing Process? Do procedures define who is responsible for accomplishing the process?

Procedure
Does the organization ensure it includes the results of oversight organization audits, and other external audit results, in the analyses conducted under SMS Framework Analysis of Data Process 3.1.7?

Outputs and Measures
Does the organization:
• Identify interfaces between this process and the Analysis of Data Process 3.1.7?
• Periodically measure performance objectives and design expectations of the External Auditing Process?

Controls
Does the organization ensure that:
• Procedures are followed for safety-related operations and activities?
• Supervisory and operational controls are periodically reviewed to ensure the effectiveness of the External Auditing Process?

Bottom Line Assessment
Has the organization included the results of audits performed by oversight organizations, and other external audit results, in its analysis of data?
ELEMENT 3.1 - SAFETY PERFORMANCE MONITORING AND MEASUREMENT
Process 3.1.5 - Investigation

Performance Objective
The organization will establish procedures to collect data and investigate incidents, accidents, and instances of potential regulatory non-compliance to identify potential new hazards or risk control failures.

Design Expectations

Input
Are inputs (interfaces) for this process obtained from the Control/Mitigate Safety Risk Process 2.2.3 and, as needed, upon the occurrence of events?

Management Responsibility
Does the organization clearly identify who is responsible for the quality of the Investigation Process? Do procedures define who is responsible for accomplishing the process?

Procedure
Does the organization ensure it collects data on -
• Incidents?
• Accidents?
• Potential regulatory non-compliance?
Does the organization ensure that procedures are established to investigate -
• Accidents?
• Incidents?
• Instances of potential regulatory non-compliance?

Outputs and Measures
Does the organization:
• Identify interfaces between this process and the Analysis of Data Process 3.1.7?
• Periodically measure performance objectives and design expectations of the Investigation Process?

Controls
Does the organization ensure that:
• Procedures are followed for safety-related operations and activities?
• Supervisory and operational controls are periodically reviewed to ensure the effectiveness of the Investigation Process?

Bottom Line Assessment
Has the organization established procedures to collect data and investigate incidents, accidents, and instances of potential regulatory non-compliance to identify potential new hazards or risk control failures?
ELEMENT 3.1 - SAFETY PERFORMANCE MONITORING AND MEASUREMENT
Process 3.1.6 - Employee Reporting and Feedback System

Performance Objective
The organization will establish and maintain mandatory, voluntary, and confidential Employee Safety Reporting and Feedback Systems. Data obtained from these systems will be monitored to identify emerging hazards and to assess the performance of risk controls in the operational systems.

Design Expectations

Input
Are inputs (interfaces) for the Employee Reporting and Feedback System obtained from employees?

Management Responsibility
Does the organization clearly identify who is responsible for the quality of the Employee Reporting and Feedback Process? Do procedures define who is responsible for accomplishing the process?

Procedure
Has the organization established and maintained a mandatory, voluntary, and confidential Employee Reporting and Feedback System as described in the Safety Promotion Component? Does the organization ensure employees are encouraged to use the Safety Reporting and Feedback Systems without fear of reprisal and to submit solutions/safety improvements where possible? Does the organization ensure data from the Safety Reporting and Feedback System are monitored to identify emerging hazards? Does the organization ensure the data collected in the Employee Reporting and Feedback System are included in the analyses conducted under SMS Framework Analysis of Data Process 3.1.7?

Outputs and Measures
Does the organization:
• Identify interfaces between this process and the Analysis of Data Process 3.1.7?
• Periodically measure performance objectives and design expectations of the Employee Reporting and Feedback Process?

Controls
Does the organization ensure that:
• Procedures are followed for safety-related operations and activities?
• Supervisory and operational controls are periodically reviewed to ensure the effectiveness of the Employee Reporting and Feedback Process?

Bottom Line Assessment
Has the organization established and maintained a mandatory, voluntary, and confidential Employee Safety Reporting and Feedback System? Are the data obtained from this system monitored to identify emerging hazards and to assess the performance of risk controls in the operational systems?
ELEMENT 3.1 - SAFETY PERFORMANCE MONITORING AND MEASUREMENT
Process 3.1.7 - Analysis of Data

Performance Objective
The organization will analyze the data described in SMS Framework Processes 3.1.1 through 3.1.6 to assess the performance and effectiveness of risk controls in the organization's operational processes and the SMS, and to identify root causes of deficiencies and potential new hazards.

Design Expectations

Input
Are inputs (interfaces) for this process obtained from the data acquisition processes 3.1.1 through 3.1.6?

Management Responsibility
Does the organization clearly identify who is responsible for the quality of the Analysis of Data Process? Do procedures also define who is responsible for accomplishing the process?

Procedure
Does the organization analyze the data that it collects to demonstrate the effectiveness of -
• Risk controls in the organization's operational processes (SMS Framework Safety Policy Component)?
• The organization's SMS?
Does the organization ensure it analyzes the data it collects to identify root causes of deficiencies and potential new hazards and to evaluate where improvements can be made in -
• The organization's operational processes?
• The SMS?

Outputs and Measures
Does the organization:
• Identify interfaces between this process and the System Assessment Process 3.1.8?
• Periodically measure performance objectives and design expectations of the Analysis of Data Process?

Controls
Does the organization ensure that:
• Procedures are followed for safety-related operations and activities?
• Supervisory and operational controls are periodically reviewed to ensure the effectiveness of the Analysis of Data Process?

Bottom Line Assessment
Has the organization analyzed the data described in SMS Framework Processes 3.1.1 through 3.1.6 to assess the performance and effectiveness of risk controls in the organization's operational processes and the SMS, and to identify root causes of deficiencies and potential new hazards?
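One way the data-analysis expectation works in practice is by aggregating reports from the data-acquisition processes (3.1.1 through 3.1.6) to surface recurring occurrence categories as potential emerging hazards. The sketch below is illustrative only; the category names and the "three or more reports" trigger are assumptions, not Framework values.

```python
# Hedged sketch: counting report categories across acquisition processes
# 3.1.1-3.1.6 to flag potential emerging hazards. Data and the threshold
# are hypothetical.
from collections import Counter

reports = [
    ("3.1.6", "runway incursion"),       # employee report
    ("3.1.1", "runway incursion"),       # continuous monitoring
    ("3.1.5", "runway incursion"),       # investigation
    ("3.1.2", "ground handling damage"), # internal audit
]

counts = Counter(category for _, category in reports)
emerging = [cat for cat, n in counts.items() if n >= 3]
print(emerging)  # ['runway incursion']
```

The point of the sketch is the cross-source aggregation: a category that appears only once per source may still be significant when all acquisition processes feed one analysis.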
ELEMENT 3.1 - SAFETY PERFORMANCE MONITORING AND MEASUREMENT
Process 3.1.8 - System Assessment

Performance Objective
The organization will perform an assessment of the safety performance and effectiveness of risk controls, conformance to the SMS expectations stated herein, and the objectives of the safety policy.

Design Expectations

Input
Are inputs (interfaces) for this process obtained from the Analysis of Data Process 3.1.7?

Management Responsibility
Does the organization clearly identify who is responsible for the quality of the System Assessment Process? Do procedures also define who is responsible for accomplishing the process?

Procedure
Does the organization assess the performance and effectiveness of -
• The safety-related functions of operational processes (Safety Policy Component) against their requirements?
• The SMS against its objectives and expectations?
Does the organization record system assessments that result in a finding of -
• Conformity or nonconformity with existing safety risk controls and/or SMS expectations, including regulatory requirements?
• New hazards found?

Outputs and Measures
Does the organization use Safety Risk Management (Component 2.0) if risk assessment and risk control performance indicates -
• That new hazards or potential hazards have been found?
• That the system needs to be changed?
Does the organization maintain records of assessments in accordance with the requirements of SMS Documentation and Records Element 1.5? Does the organization identify interfaces between the system assessment function and -
• The hazard identification function (2.1.2, Identify Hazards)?
• The preventive and corrective action function (3.3.1, Preventive/Corrective Action)?
Does the organization periodically measure performance objectives and design expectations of the System Assessment Process?

Controls
Does the organization ensure that:
• Procedures are followed for safety-related operations and activities?
• Supervisory and operational controls are periodically reviewed to ensure the effectiveness of the System Assessment Process?

Bottom Line Assessment
Has the organization assessed the performance and effectiveness of risk controls, conformance with SMS requirements, and the objectives of the Safety Policy?
ELEMENT 3.2 - MANAGEMENT OF CHANGE
Performance Objective The organization’s management will identify and determine acceptable safety risks for changes within the organization that may affect established processes and services by the introduction of new technology or equipment, changes in the operating environment, changes in key personnel, significant changes in staffing levels, changes in safety regulatory requirements, significant restructuring of the organization, physical changes, changes to existing system designs, new operations or procedures or modifications to existing operations or procedures.
Design Expectations

Input
Are inputs (interfaces) for this process obtained from the proposed introduction of new technology or equipment, changes in the operating environment, changes in key personnel, significant changes in staffing levels, changes in safety regulatory requirements, significant restructuring of the organization, physical changes, or changes to systems, processes or procedures?

Management Responsibility
Does the organization clearly identify who is responsible for the quality of the Management of Change Process? Do procedures also define who is responsible for accomplishing the process?

Procedure
Does the organization ensure it does not implement any of the following until the level of safety risk of each identified hazard is determined to be acceptable for -
• Introduction of new technology or equipment?
• Changes in the operating environment?
• Changes in key personnel?
• Significant changes in staffing levels?
• Changes in safety regulatory requirements?
• Significant restructuring of the organization?
• Physical changes such as operational office set up, aerodrome, and ground facilities?
• Changes to existing system designs?
• New operations or procedures?
• Modifications to existing operations or procedures?

Outputs and Measures
Does the organization:
• Ensure that this process is interfaced with the SRM process (System Description and Task Analysis 2.1.1)?
• Ensure that a Preliminary Safety Analysis (PSA) is produced and that TEPIOILQC is considered?
• Ensure that Preliminary Hazards Lists (PHLs) are produced?
• Ensure that an Operation Readiness Case (ORC) is produced?
• Periodically measure performance objectives and design expectations of the Management of Change Process?

Controls
Does the organization ensure that:
• Procedures are followed for safety-related operations and activities?
• A Preliminary Safety Analysis (PSA) is produced and TEPIOILQC is considered?
• Preliminary Hazards Lists (PHLs) are produced?
• An Operation Readiness Case (ORC) is produced?
• They periodically review supervisory and operational controls to ensure the effectiveness of the Management of Change Process?

Bottom Line Assessment
Has the organization’s management assessed risk for changes within the organization that may affect established processes and services through the introduction of new technology or equipment, changes in the operating environment, changes in key personnel, significant changes in staffing levels, changes in safety regulatory requirements, significant restructuring of the organization, physical changes, changes to existing system designs, or new operations or procedures or modifications to existing operations or procedures?
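The central rule of the Management of Change procedure, that none of the listed changes may be implemented until the safety risk of every identified hazard has been determined acceptable, can be illustrated as a simple gate. The risk-level labels and the `ACCEPTABLE_LEVELS` set are assumptions for illustration; the framework does not define acceptability criteria.

```python
# Assumed, illustrative acceptability criteria; the organization defines
# its own risk levels and what counts as acceptable.
ACCEPTABLE_LEVELS = {"low", "medium"}

def change_may_proceed(hazard_risk_levels):
    # A change is implemented only when the risk level of EVERY identified
    # hazard has been assessed and found acceptable. An empty analysis
    # means the hazards have not yet been assessed, so the change is held.
    if not hazard_risk_levels:
        return False
    return all(level in ACCEPTABLE_LEVELS
               for level in hazard_risk_levels.values())
```

Holding the change on an empty analysis mirrors the "do not implement until the level of safety risk of each identified hazard is determined" wording: absence of analysis is not acceptance.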
ELEMENT 3.3 CONTINUOUS IMPROVEMENT
Performance Objective
The organization will promote continuous improvement of its SMS through recurring application of SRM (Component 2.0), SA (Component 3.0), and by using safety lessons learned and communicating them to all personnel.
Design Expectations

Input
Are inputs (interfaces) for this process obtained through continuous application of Safety Risk Management (Component 2.0), Safety Assurance (Component 3.0) and the outputs of the SMS, including safety lessons learned?

Management Responsibility
Does the organization clearly identify who is responsible for the quality of the Continual Improvement Process? Do procedures also define who is responsible for accomplishing the process?

Procedure
Does the organization continuously improve the effectiveness of the SMS and of safety risk controls through the use of the safety and quality policies, objectives, audit and evaluation results, analysis of data, corrective and preventive actions, and management reviews?
Does the organization develop safety lessons learned and -
• Use safety lessons learned to promote continuous improvement of safety?
• Ensure that safety lessons learned are communicated to all personnel?

Outputs and Measures
Does the organization:
• Ensure that trend analysis of safety and quality policies, objectives, audit and evaluation results, analysis of data, and corrective and preventive actions is interfaced with the Management Review Process (3.3.2)?
• Periodically measure performance objectives and design expectations of the Continual Improvement Process?

Controls
Does the organization ensure that:
• Procedures are followed for safety-related operations and activities?
• They periodically review supervisory and operational controls to ensure the effectiveness of the Continuous Improvement Process?

Bottom Line Assessment
Has the organization promoted continuous improvement of its SMS through recurring application of Safety Risk Management (Component 2.0), Safety Assurance (Component 3.0), and by using safety lessons learned and communicating them to all personnel?
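The trend analysis that this element feeds into the Management Review Process (3.3.2) can be sketched minimally. Comparing the first and last review periods is a deliberately crude illustration, and `finding_trend` is a hypothetical name, not a framework-defined function.

```python
def finding_trend(period_counts):
    # Classify the trend of nonconformity findings across successive
    # review periods, as one input to the Management Review Process (3.3.2).
    if len(period_counts) < 2:
        return "insufficient data"
    if period_counts[-1] < period_counts[0]:
        return "improving"
    if period_counts[-1] > period_counts[0]:
        return "deteriorating"
    return "stable"
```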
Process 3.3.1 - Preventive/Corrective Action

Performance Objective
The organization will take preventive and corrective action to eliminate the causes or potential causes of nonconformance identified during analysis, to prevent recurrence.
NOTE: Although Process 2.2.3 (Control/Mitigate Safety Risk) is very similar to Process 3.3.1, the primary differences are:
• Process 2.2.3 is used during the design of a system (often looking to the future) or in the redesign of a non-performing system where system requirements are being met, but the system is not producing the desired results.
• Process 2.2.3 is also used where new hazards are discovered during Safety Assurance that were not taken into account during initial design.
• Process 3.3.1 is used to develop actions to bring a non-performing system back into conformance with its design requirements.
Design Expectations

Inputs
Are inputs (interfaces) for this process obtained from System Assessments (Process 3.1.8) with findings of non-performing risk controls?

Management Responsibility
Does the organization clearly identify who is responsible for the quality of the Preventive/Corrective Action Process? Do procedures define who is responsible for accomplishing the process?

Procedure
Does the organization develop the following -
• Preventive actions for identified potential nonconformities with risk controls?
• Corrective actions for identified nonconformities with risk controls?
Does the organization consider safety lessons learned in the development of -
• Preventive actions?
• Corrective actions?
Does the organization take necessary preventive and corrective action based on the findings of investigations?
Does the organization prioritize and implement preventive and corrective actions in a timely manner?

Outputs and Measures
Does the organization keep and maintain records of the disposition and status of preventive and corrective actions according to established record retention policy?
Does the organization:
• Identify interfaces between this process and the Continuous Monitoring Process (3.1.1)?
• Periodically measure performance objectives and design expectations of the Preventive and Corrective Action Process?

Controls
Does the organization ensure that:
• Procedures are followed for safety-related operations and activities?
• They periodically review supervisory and operational controls to ensure the effectiveness of the Preventive and Corrective Action Process?

Bottom Line Assessment
Has the organization taken preventive or corrective actions to eliminate the causes of nonconformances, identified during analysis, to prevent recurrence?
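The expectation that preventive and corrective actions be prioritized and implemented in a timely manner might be supported by a simple tracker such as the sketch below; the `Action` record shape and the 1-to-5 severity scale are illustrative assumptions, not framework requirements.

```python
from dataclasses import dataclass

@dataclass
class Action:
    finding: str
    kind: str          # "preventive" or "corrective"
    severity: int      # 1 (minor) .. 5 (severe); an assumed scale
    status: str = "open"

def prioritize(actions):
    # Order open preventive/corrective actions so the most severe
    # findings are addressed first; closed actions drop out of the queue.
    return sorted((a for a in actions if a.status == "open"),
                  key=lambda a: -a.severity)
```

Closed actions remain in the organization's records (per the retention policy above); they are only excluded from the work queue.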
Process 3.3.2 - Management Review

Performance Objective
Top management will conduct regular reviews of the SMS to assess the performance and effectiveness of an organization’s operational processes and the need for improvements.
Design Expectations

Input
Are inputs (interfaces) for this process obtained from the outputs of Safety Risk Management (Component 2.0) and Safety Assurance (Component 3.0) activities including -
• Hazard identification (Process 2.1.2)?
• Risk analysis (severity and likelihood) (Process 2.2.1)?
• Risk assessments (Process 2.2.2)?
• Risk control/mitigation plans (Process 2.2.3)?
• Results of analysis of data (Process 3.1.7)?

Management Responsibility
Does the organization clearly identify who is responsible for the quality of the Management Review Process? Do procedures define who is responsible for accomplishing the process?

Procedure
Does top management conduct regular reviews of the SMS, including the outputs of the Safety Risk Management Processes, the outputs of the Safety Assurance Processes, and safety lessons learned?
Does top management include in its reviews of the SMS an assessment of the need for improvements to the organization’s operational processes and the SMS?

Outputs and Measures
Does the organization keep records of the disposition and status of management reviews according to the organization’s record retention policy?
Does the organization:
• Identify interfaces between this process and the Hazard Identification Process (2.1.2, above) and Preventive and Corrective Action Process (3.3.1)?
• Periodically measure performance objectives and design expectations of the Management Review Process?

Controls
Does the organization ensure that:
• Procedures are followed for safety-related operations and activities?
• They periodically review supervisory and operational controls to ensure the effectiveness of the Management Review Process?

Bottom Line Assessment
Has top management conducted regular reviews of the SMS, including outputs of Safety Risk Management (Component 2.0), Safety Assurance (Component 3.0), and lessons learned? Have management reviews included an assessment of the performance and effectiveness of the organization’s operational processes and the need for improvements?
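The review inputs above include risk analysis in terms of severity and likelihood (Process 2.2.1). One common way to combine the two dimensions, though not one this framework prescribes, is a risk matrix; the scales and acceptability outcomes below are purely illustrative.

```python
# Illustrative 3x3 matrix; the framework defines neither these scales
# nor the resulting acceptability outcomes.
RISK_MATRIX = {
    ("high", "frequent"): "unacceptable",
    ("high", "occasional"): "unacceptable",
    ("high", "rare"): "review",
    ("medium", "frequent"): "review",
    ("medium", "occasional"): "review",
    ("medium", "rare"): "acceptable",
    ("low", "frequent"): "review",
    ("low", "occasional"): "acceptable",
    ("low", "rare"): "acceptable",
}

def risk_level(severity, likelihood):
    # Combine a severity rating and a likelihood rating into a single
    # risk outcome for management review.
    return RISK_MATRIX[(severity, likelihood)]
```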
COMPONENT 4.0 - SAFETY PROMOTION
Component Performance Objective
Top management will promote the growth of a positive safety culture and communicate it throughout the organization.

Design Expectations

Input
Are inputs (interfaces) identified between top management and organizational personnel?

Management Responsibility
Does the organization clearly identify who is responsible for the quality of the Safety Promotion Component (4.0)? Do procedures also define who is responsible for accomplishing the process?

Procedure/Output/Measure
Does top management promote the growth of a positive safety culture through -
• Publication of top management’s stated commitment to safety to all employees?
• Visible demonstration of their commitment to the SMS?
• Communication of the safety responsibilities for the organization’s personnel?
• Clear and regular communication of safety policy, goals, expectations, standards, and performance to all employees of the organization?
• An effective employee reporting and feedback system that is non-punitive and provides confidentiality?
• Use of a safety information system that provides an accessible, efficient means to retrieve information?
• Allocation of resources essential to implement and maintain the SMS?
Does the organization periodically measure performance objectives and design expectations of the Safety Promotion Component?

Controls
Does the organization ensure that:
• Procedures are followed for safety-related operations and activities?
• They periodically review supervisory and operational controls to ensure the effectiveness of the Safety Promotion Component?

Bottom Line Assessment
Has top management promoted the growth of a positive safety culture and communicated it throughout the organization?
ELEMENT 4.1 COMPETENCIES AND TRAINING
Process 4.1.1 - Personnel Expectations (Competence)

Performance Objective
The organization will document competency requirements for those positions identified in Elements 1.2 and 1.3 and ensure those requirements are met.
Design Expectations

Input
Are inputs (interfaces) for this process identified between top management and the key safety personnel referenced in Management Commitment and Safety Accountabilities Element 1.2 & Key Safety Personnel Element 1.3?

Management Responsibility
Does the organization clearly identify who is responsible for the quality of the Personnel Expectations Process? Do procedures also define who is responsible for accomplishing the process?

Procedure
Does the organization identify the competency requirements for safety-related positions identified in Management Commitment and Safety Accountabilities Element 1.2 & Key Safety Personnel Element 1.3?

Outputs and Measures
Does the organization ensure that the personnel in the safety-related positions identified in Management Commitment and Safety Accountabilities Element 1.2 & Key Safety Personnel Element 1.3 meet the documented competency requirements of Personnel Expectations Process 4.1.1?
Does the organization periodically measure performance objectives and design expectations of the Personnel Expectations Process?

Controls
Does the organization ensure that:
• Procedures are followed for safety-related operations and activities?
• They periodically review supervisory and operational controls to ensure the effectiveness of the personnel qualification and training process?

Bottom Line Assessment
Has the organization documented competency requirements for those positions identified in Management Commitment and Safety Accountabilities Element 1.2 and Key Safety Personnel Element 1.3 and ensured those requirements were met?
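Checking that personnel in the Element 1.2/1.3 positions meet the documented competency requirements amounts to a simple gap analysis, sketched below with hypothetical data shapes (the `requirements` and `personnel` mappings are illustrative, not framework-defined).

```python
def unmet_competencies(requirements, personnel):
    # requirements: position -> set of documented required competencies.
    # personnel:    position -> set of competencies the incumbent holds.
    # Returns {position: missing competencies} for any shortfall found.
    gaps = {}
    for position, required in requirements.items():
        held = personnel.get(position, set())
        missing = required - held
        if missing:
            gaps[position] = missing
    return gaps
```

An empty result means every documented requirement is met, which is the condition the bottom line assessment above asks about.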
Process 4.1.2 - Training

Performance Objective
The organization will develop, document, deliver and regularly evaluate training necessary to meet the competency requirements of Process 4.1.1.

Design Expectations

Input
Are inputs (interfaces) for the Training Process obtained through the outputs of the SMS and the documented competency expectations of Personnel Expectations Process 4.1.1?

Management Responsibility
Does the organization clearly identify who is responsible for the quality of the SMS Training Process? Do procedures define who is responsible for accomplishing the process?

Procedure
Does the organization’s training meet the competency expectations of Personnel Expectations Process 4.1.1 for the personnel in the safety-related positions identified in Management Commitment and Safety Accountability Element 1.2 & Key Safety Personnel Element 1.3?
Does the organization consider the scope, content, and frequency of training required to meet and maintain competency for those individuals in the positions identified in Management Commitment and Safety Accountability Element 1.2 and Key Safety Personnel Element 1.3?
Do the organization’s employees receive training commensurate with their -
• Position level within the organization?
• Impact on the safety of the organization’s products or services?
Does the organization maintain training currency by periodically -
• Reviewing the training?
• Updating the training?

Outputs and Measures
Does the organization maintain records of required and delivered training?
Does the organization:
• Identify interfaces between safety lessons learned and the training functions, as well as the interfaces between the training functions and the delivery of training deemed necessary to meet the competency requirements of Process 4.1.1?
• Periodically measure performance objectives and design expectations of the SMS Training Process?

Controls
Does the organization ensure that safety-related training media are periodically reviewed and updated for target populations?
Does the organization ensure that:
• Procedures are followed for safety-related operations and activities?
• They periodically review supervisory and operational controls to ensure the effectiveness of the SMS Training Process?

Bottom Line Assessment
Has the organization developed, documented, delivered and regularly evaluated training necessary to meet the competency expectations of the Personnel Expectations Process 4.1.1?
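Maintaining training currency through periodic review, as expected above, could be tracked with a simple recurrency check; the 24-month default interval is an illustrative assumption, since the organization itself sets the scope, content, and frequency of training.

```python
from datetime import date, timedelta

def training_current(last_delivered, today, interval_months=24):
    # True while the last delivered training is still within the
    # recurrency interval (24 months is an illustrative default, using
    # an approximate 30-day month).
    return today - last_delivered <= timedelta(days=interval_months * 30)
```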
ELEMENT 4.2 - COMMUNICATION AND AWARENESS
Performance Objective
Top management will communicate the output of its SMS to its employees, and will provide its oversight organization access to SMS outputs in accordance with established agreements and disclosure programs.
Design Expectations

Input
Are inputs (interfaces) for this process obtained from the outputs of the Safety Risk Management (2.0) and Safety Assurance (3.0) Components, including -
• Hazard identification (2.1.2)?
• Risk severity and likelihood (2.2.1)?
• Risk assessments (2.2.2)?
• Risk control/mitigation plans (2.2.3)?
• Safety lessons learned?
• Results of analysis of data (3.1.7)?

Management Responsibility
Does the organization clearly identify who is responsible for the quality of the Communication and Awareness Process? Do procedures define who is responsible for accomplishing the process?

Procedure/Output/Measure
Does the organization ensure it communicates outputs of the SMS, the rationale behind controls, and preventive and corrective actions, and ensures awareness of SMS objectives among its employees?
Does the organization ensure it provides its oversight organization access to the outputs of the SMS in accordance with established agreements and disclosure programs?
Does the organization interface with other organizations’ SMSs to cooperatively manage issues of mutual concern?
Does the organization periodically measure performance objectives and design expectations of the Communication and Awareness Process?

Controls
Does the organization ensure that:
• Procedures are followed for safety-related operations and activities?
• They periodically review supervisory and operational controls to ensure the effectiveness of the Communication and Awareness Process?

Bottom Line Assessment
Has top management communicated the output of its SMS to employees and provided its oversight organization access to SMS outputs in accordance with established agreements and disclosure programs?
APPENDIX B. DESIGNATION OF THE ACCOUNTABLE EXECUTIVE
A. Designation of the Accountable Executive
(a) The Accountable Executive (AE) is the person who has corporate authority to direct and control the organization at the highest level and is ultimately accountable for the safety of the certificated activities in the organization. The Accountable Executive is often known as an Accountable Manager or Chief Executive Officer (CEO); these terms are used interchangeably. The requirements to be followed when nominating/designating an Accountable Executive candidate are as follows:
1. The owner of an establishment or sole proprietorship organization can nominate himself/herself as the Accountable Executive if the organization is registered under his/her proprietorship. The proprietor can also nominate an appointee to function as the Accountable Executive, in which case the nominee must also sign the application/declaration of the Accountable Executive (Attachment-4 to this Appendix).
2. The major stakeholder can nominate himself/herself as the Accountable Executive if the organization is registered as a partnership. The major stakeholder can also nominate another partner or an appointee as the Accountable Executive. In such a case, the major stakeholder and the nominee Accountable Executive must sign the application/declaration accordingly (Attachment-4 to this Appendix).
3. The chairman of the board of a Joint Stock Company or a corporate entity can nominate any board member or an appointee Chief Executive Officer (CEO) to function as the Accountable Executive. In such a case, both the chairman of the board of directors and the nominee Accountable Executive must sign the application/declaration (Attachment-4 to this Appendix). The Accountable Executive nomination/designation flow chart is given in Attachment-1 to this appendix.
(b) The Accountable Executive must have corporate authority over financial controls, human resources controls, and organizational operational controls. He/she must hold this position at the highest level and ultimately be accountable for the safety of the certificated activities in the organization. Refer to the Accountable Executive Selection Questionnaire given in Attachment-2 to this appendix.
(c) The validation process involves verification of one or more of the following documents, as appropriate to the organization’s set-up:
(1) Company registration document
(2) Partnership deed showing the ownership
(3) Registration document for proprietorship
(d) For an Accountable Executive nominee who does not clearly and fully meet the existing selection criteria given in the flow chart of Attachment-1 and the questionnaire of Attachment-2 (for example, a person whose decisions can be overruled by a superior individual or board), the supplementary questions given in Attachment-3 to this appendix must be answered to confirm the validity of the nomination/designation.
(e) Once a nominee fulfills the requirements to function as the Accountable Executive, the proprietor, major stakeholder, or board of directors, as appropriate to the organization, must submit the nomination/designation to GACA. The format for the nomination/designation is given in Attachment-4 to this appendix.
(f) GACA Inspectors will follow the validation checklist given in Attachment-5 to this appendix, to confirm that the nominee Accountable Executive meets all requirements to be designated as the Accountable Executive. The validation process involves verifying documentary evidence during the initial audit, where the signed nomination/designation letter (Attachment-4 to this appendix) should be submitted to GACA. A letter of nomination/designation acceptance will be issued to the nominee Accountable Executive upon satisfactory validation of the nomination/designation.
B. Acceptance of Key Management Personnel
(a) Designation of management personnel varies among service providers such as Air Operators, Aircraft Maintenance and Repair Organizations, Aerodrome Operators, Air Navigation Service Providers, and Ground Services, as stipulated in the respective GACARs.
(b) The Accountable Executive shall nominate a person or group of persons as key management personnel/post holders under the provisions of the specific GACAR for the GACA-regulated entity, whose responsibilities include ensuring that the organization complies with the requirements of such GACAR for particular functions. All such person(s) must ultimately be responsible to the Accountable Executive.
(c) The person or persons nominated must represent the certificated organization’s management structure and be responsible for all functions specified under the positions in the respective GACARs.

C. Organizational Structure
(a) Service Providers must have clearly defined lines of safety accountability throughout the organization, including direct accountability for safety on the part of senior management.
(b) The guiding principles for management structure are as follows:
(1) Full safety accountability should reside at the top level of the organization’s Accountable Executive (e.g., Chief Executive Officer).
(2) The Accountable Executive (AE) must be supported by an independent safety support function that operates with the full authority of the AE.
(3) The independent safety support function, headed by the safety manager, shall be independent of any operational responsibilities to avoid any potential conflict of interest.
(4) Individuals within the safety support function should have respect and influence.
(5) There should be formal communications from the AE to the safety support function.
(6) Actions necessary to support the SMS functions should be managed throughout the organization.
(7) Safety accountabilities and responsibilities should be documented and understood by personnel at all organizational levels.
(8) Aviation organizations may vary in size and complexity. Safety responsibilities vary with the organization’s size, complexity, and structure. The flowchart provided in Attachment-6 to this Appendix provides the required guidance for the determination of safety responsibilities in an organization. For the purposes of determining safety responsibilities in an organization, small organizations are those which employ 20 or fewer employees.
Attachment-1 – Accountable Executive Selection Flow Chart

Attachment-2 – Accountable Executive Selection Questionnaire

Attachment-3 – Supplemental Questions to Validate Accountable Executive Nomination
The following questions can be used to validate a prospective Accountable Executive who does not clearly and fully meet the existing selection criteria, in particular a person whose decisions can be overruled by a superior individual or board.
With reference to the Accountable Executive Selection Flow Chart and the company organization chart, determine the clear authority of the nominee Accountable Executive. If there is no clear authority, in other words, if the nominee Accountable Executive reports to someone who is not the stakeholder or owner of the organization, seek clear answers to the following questions for validation.
Name of the legal entity:
Does the certificate holder have other certificates? If so, list all the certificated organizations:
The following questions are directed to the prospective accountable executive:
(a) Are you the owner, a major shareholder, or an employee?
(b) Were you considered to be a representative of the Certificate Holder to sign off approved manuals prior to the implementation of the Accountable Executive regulations? If not, who holds this authority?
(c) Whom do you report to?
(d) In the case of multiple certificates, do you have authority over all certificates?
(e) Can any of your decisions be over-ruled? If so, who has this authority?
(f) Do you have final authority over market objectives? If not, who holds this authority?
(g) Do you control when, where and under what conditions aircraft are operated? If not, who holds this authority (in the case of Air Operators)?
(h) Do you have control over the hiring of technical employees and approved organization managers? If not, who holds this authority?
(i) Can new employees be imposed on you?

Attachment-4 – Nomination of an Accountable Executive

Attachment-5 – Accountable Executive Validation Checklist

Attachment-6 – Determining Safety Responsibilities in an Organization
Note: This flow chart shows a typical example for an Air Operator. A similar approach may be followed for Aircraft Maintenance Organizations, ANS providers and Aerodrome operators.
APPENDIX D. SAFETY MANAGER ACCEPTANCE APPLICATION FORM
The referred form, the Safety Manager Acceptance Application Form (GACA-AVSES-SRM-F001), can be downloaded from GACA's website under the relevant forms. URL: https://gaca.gov.sa/rules-and-regulations-category/aviation-safety-and-environmental-sustainability/forms/gaca-avses-srm-f001---safety-manager-acceptance-application-form
APPENDIX E. SAFETY MANAGEMENT SYSTEM EFFECTIVENESS CHECKLIST
Purpose. The Kingdom of Saudi Arabia’s (KSA) State Safety Program (SSP) is built on dynamic safety assurance foundations, and as part of SMS implementation, assurance of the effectiveness of certificate holders’ Safety Management Systems is carried out by the GACA general departments. This checklist is to be used by inspectors on their surveillance visits to measure how effective the SMS is for each service provider. The referred checklist, the GACA SMS Effectiveness Assessment Tool (GACA_AVSES_SRM_CL-5), can be downloaded from GACA's website under the relevant forms.
URL: https://gaca.gov.sa/rules-and-regulations-category/aviation-safety-and-environmental-sustainability/forms/safety-management-system-manager-acceptance-application-form
2.5.1.1. BACKGROUND.
A. The objective of this chapter is to provide guidance for accepting or rejecting an aviation organization’s Safety Management System (SMS).
B. The SMS acceptance guidance addresses two important scenarios for an aviation organization’s SMS proposal:
1) To accept or reject the development and implementation of an SMS that complies with General Authority of Civil Aviation Regulation (GACAR) Part 5, for a new aviation organization seeking initial certification.
2) To accept or reject the development and implementation of an SMS that complies with GACAR Part 199, for a current aviation organization eligible for the phased implementation process provided by the regulation.
2.5.1.3. SCOPE AND REFERENCES.
A. Scope. This guidance is for the acceptance of a proposed SMS developed by aviation organizations (for example, air operators, flight training organizations, repair stations, air traffic service providers, and aerodrome operators) requiring certification/authorization under the GACARs.
B. References. This guidance is in accordance with the following documents:
• GACA eBook Volume 2, Chapter 2, SMS Framework
• GACA eBook Volume 2, Chapter 3, Guidance for SMS Phased Implementation
• GACA eBook Volume 2, Chapter 4, SMS Assessments
• ICAO Annex 1, Personnel Licensing
• ICAO Annex 6, Operation of Aircraft
• ICAO Annex 11, Air Traffic Services
• ICAO Annex 14, Aerodromes
• International Civil Aviation Organization (ICAO) Document 9859 (as amended), ICAO Safety Management Manual (SMM)
• ICAO Document 9734, Safety Oversight Manual
2.5.2.1. NEW AVIATION ORGANIZATION CERTIFICATION – SMS ACCEPTANCE.
A. General. It is the responsibility of the General Authority of Civil Aviation (GACA) aviation safety inspector (Inspector) to make an assessment of a proposed Safety Management System (SMS) submitted by a prospective aviation organization as part of the overall certification process. The assessment and acceptance of the proposed SMS will be by determination that the proposal is in accordance with the SMS framework described in Chapter 2 of this Volume. The SMS framework is composed of components, elements, and processes, each of which is explained in terms of its functional expectations, or how they would need to be used in order to contribute to an effective SMS. The assessment activities related to the acceptance of an SMS are focused primarily on whether the applicant has implemented an SMS that meets all of the design expectations (i.e. design assessments) defined in the SMS framework. The assessment of the actual performance of the SMS (i.e. performance assessments) occurs after the initial SMS acceptance. The assessment (surveillance) of SMS performance is described in greater detail in Volume 12, Chapter 19 of this handbook.
B. Requirements. The objectives and expectations outlined in the framework represent the minimum standard that an aviation organization must develop and implement in order to comply with the SMS requirements specified in General Authority of Civil Aviation Regulation (GACAR) Part 5. The framework describes the objectives and expectations for an aviation organization’s SMS.
1) The framework is intended to address only operational and support processes and activities that are related to aviation safety and not to address those related to occupational safety, environmental protection, or customer service quality.
2) In addition, aviation organizations are responsible for the safety of services or products they purchase or contract from other organizations.
3) The framework establishes the minimum objectives and expectations for an effective and compliant SMS. Aviation organizations may establish additional or stricter requirements.
C. Assessment Tools and Techniques. As the Inspectors work through the process of assessing the proposed SMS for a new aviation organization’s certification, there are two primary determinations that must be made to find the proposal acceptable: first, that the proposed SMS includes all the items required by the SMS framework described in Chapter 2 of this Volume; and, second, that the design expectations required by the framework have been adequately met by the documentation in the proposed SMS.
1) The primary tool for assessing the content of the proposed SMS is the SMS Assessment Guide found in Chapter 4, Appendix A of this Volume.
2) The SMS Assessment Guide was developed to aid in the assessment of the design of aviation organizations’ SMS programs in order to ensure that they comply with the SMS requirements specified in GACAR Part 5.
3) For each required component, element and process, the SMS Assessment Guide includes:
• A brief statement of the performance objective
• A series of questions used to assess (i.e., evaluate) whether the design expectations have been met
• A “bottom line assessment” question, which is used only during the periodic performance assessments explained in greater detail in Volume 12 on SMS surveillance
4) Assessors should ask each question that pertains to the component, element or process under review and document their observations. From these assessments, the determination is made whether the SMS meets the minimum standards specified in GACAR Part 5.
D. Aviation Organization’s Safety Performance Indicators and Targets. In accordance with the guidance provided in Section 4 of this Chapter, the Inspector must review and evaluate the aviation organization's proposed SPIs and SPTs to determine their acceptability. This assessment shall be conducted in line with the evaluation process outlined in Appendix B to Chapter 5. Based on this evaluation, the Inspector must either determine that the proposed SPIs and SPTs are acceptable to GACA or require the aviation organization to adjust its SPIs and/or SPTs.
E. Pass/Fail Indicators for Determining SMS Acceptance. In order to use the results of the assessment process conducted with the SMS Assessment Guide (Chapter 4, Appendix A) to determine that the SMS proposal is acceptable, the Pass/Fail criteria (labeled Acceptance Criteria) located in Appendix A of this Chapter must be applied. If the aviation organization’s SMS fails to meet any of the acceptance criteria, the organization must be informed of the areas in which it is lacking, and these items must be brought into full compliance before the SMS can be formally accepted.
NOTE: All items listed in the acceptance criteria must be satisfied for the SMS to be acceptable to GACA.
F. Formal SMS Acceptance. For new certification programs (i.e., new aerodromes, new repair stations, new air operators, etc.) the formal acceptance of the SMS occurs at the time of certificate issuance. In accordance with the requirements of GACAR Part 5, the SMS is considered formally accepted when the President of the GACA, or his representative designated for the purpose of SMS acceptance, has specifically endorsed the aviation organization certification documentation for SMS acceptance. A sample letter to communicate formal SMS acceptance is in Section 5 of this Chapter.
2.5.3.1. CURRENTLY CERTIFICATED AVIATION ORGANIZATIONS - SMS ACCEPTANCE.
A. General. It is a responsibility of the General Authority of Civil Aviation (GACA) aviation safety inspector (Inspector) to accept or reject implementation plans and implementation levels (phases) of a proposed Safety Management System (SMS) submitted by aviation organizations (air operators, repair stations, flight training organizations, aerodromes, etc.) that are eligible for a phased implementation of an SMS, as provided for by the General Authority of Civil Aviation Regulations (GACAR).
B. The overall objective of this section is to assist the GACA Inspector in making a determination whether to accept or reject an aviation organization’s SMS implementation activities in order to ensure compliance with the GACAR phased implementation requirements.
2.5.3.3. APPLICABILITY. This Inspector guidance is designed for use in determining the acceptability of an existing aviation organization’s SMS phased implementation activities. This implementation guidance is based on the SMS Framework in Chapter 2 of this Volume and the SMS Phased Implementation activities described in Chapter 3 of this Volume.
2.5.3.5. REFERENCES. The following references are recommended for Inspectors tasked with assessing the acceptance of the development and implementation of an SMS:
• International Civil Aviation Organization (ICAO) Document 9859 (as amended), ICAO Safety Management Manual (SMM), especially the chapter which addresses the PHASED APPROACH TO SMS IMPLEMENTATION
• SMS Framework Guidance in Chapter 2 of this Volume
• SMS Phased Implementation Guidance in Chapter 3 of this Volume
• SMS Assessment Guidance in Chapter 4 of this Volume
2.5.3.7. ROLE OF THE INSPECTOR IN SMS PHASED IMPLEMENTATION.
A. Engagement with the aviation organization during SMS development and implementation will provide the GACA Inspector with an opportunity to learn, often in great detail, about the aviation organization’s management systems, safety programs and safety culture, as well as providing an ideal opportunity for interfacing with the aviation organization’s management.
B. The GACA will be responsible for reviewing the aviation organization’s implementation plan and its accomplishment at each maturity level of the SMS implementation.
C. Specifically, the GACA is responsible for the following:
• Review the gap analysis
• Review and accept the aviation organization’s SMS implementation plan and other documents
• Discuss the requirements of the exit criteria for all implementation phases with the aviation organization. Exit criteria are those SMS development activities that must be completed prior to moving to the next implementation phase
2.5.3.9. SMS PHASED IMPLEMENTATION STRATEGY.
A. Phased Implementation. The initial SMS implementation strategy follows a four-phased process similar to that outlined in the International Civil Aviation Organization (ICAO) Safety Management Manual (SMM). The phases of implementation are arranged in four levels of implementation “maturity”. The timeline and milestone requirements for each implementation phase are according to the requirements outlined in the GACAR. The four phases (and implementation levels) of phased SMS implementation are:
• Level 1 (Phase 1) — Planning & Organizing SMS Implementation
• Level 2 (Phase 2) — Reactive Processes, Basic Safety Risk Management
• Level 3 (Phase 3) — Proactive Processes, Looking Ahead
• Level 4 (Phase 4) — Continuous Improvement, Continued Assurance
B. The details of the contents of the implementation levels, including the requirements for moving from each implementation level to the next, are contained in Chapter 3 of this Volume.
2.5.3.11. GAP ANALYSIS. The phased implementation of an SMS requires an aviation organization to conduct an analysis of its system to determine which components and elements of an SMS are currently in place and which must be added or modified to meet the SMS requirements. This analysis is known as a gap analysis, and it involves comparing the SMS requirements against the aviation organization’s existing systems.
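As an illustration only, the comparison at the heart of a gap analysis can be sketched as a set difference between the framework's required elements and those the organization already has fully in place. The element names below follow the framework headings used in this Volume, but the "existing" status data are hypothetical:

```python
# Illustrative gap-analysis sketch: compare required SMS framework elements
# against what the organization already has in place. The "existing" status
# values are hypothetical examples, not drawn from any real assessment.

REQUIRED_ELEMENTS = {
    "1.1 Safety Policy",
    "1.2 Management Commitment and Safety Accountabilities",
    "1.3 Key Safety Personnel",
    "1.4 Emergency Preparedness and Response",
    "1.5 SMS Documentation and Records",
    "2.0 Safety Risk Management",
}

# Elements the organization judges to be fully in place today (hypothetical).
existing = {
    "1.1 Safety Policy",
    "1.4 Emergency Preparedness and Response",
}

def gap_analysis(required, in_place):
    """Return the elements that must be added or modified (the 'gaps')."""
    return sorted(required - in_place)

for element in gap_analysis(REQUIRED_ELEMENTS, existing):
    print("GAP:", element)
```

In practice each element would be assessed against its functional expectations rather than as a simple present/absent flag, but the output of either approach is the same: the list of gaps that the implementation plan must close.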
2.5.3.13. IMPLEMENTATION PLAN. Based on the results of the gap analysis process, an implementation plan is prepared to close the gaps, i.e., those elements of the SMS Framework whose functional expectations the aviation organization has not yet completely met (e.g., activities not already being performed). The SMS implementation plan is a realistic strategy for the implementation of an SMS that will meet the aviation organization’s safety objectives and SMS regulatory requirements while supporting effective and efficient delivery of services. It describes how the aviation organization will achieve its corporate safety objectives and how it will meet any new or revised safety requirements, regulatory or otherwise.
2.5.3.15. REVIEW OF THE AVIATION ORGANIZATION GAP ANALYSIS. In preparation for determining the acceptability of the aviation organization’s implementation plan for the phased implementation process, the Inspector must become familiar with the details of the gap analysis conducted in order to construct the implementation plan.
A. GACA Inspectors must ensure that the gap analysis produced by the aviation organization represents a credible evaluation of the state of SMS implementation at the time the aviation organization commences formal SMS implementation activities. Inspectors should utilize their knowledge of the aviation organization (including, but not limited to, its existing programs, systems and organization structure) to ensure that the gap analysis represents a true picture of the aviation organization’s gaps. Inspectors should question management personnel of the aviation organization if they have any doubts as to the credibility of the gap analysis. In addition, Inspectors are reminded to consult with their supervisor should it become apparent that disagreements about the actual level of implementation are emerging.
B. The Inspector does not formally accept or reject the gap analysis, but must ensure that it is credible in order to have confidence that the proposed implementation plan adequately addresses the full scope of required implementation activities.
2.5.3.17. REVIEW AND ACCEPTANCE OF THE AVIATION ORGANIZATION’S IMPLEMENTATION PLAN. The SMS implementation plan defines the organization’s approach to managing safety. It must be a realistic strategy for the implementation of an SMS that will meet the organization’s safety objectives.
A. The Inspector will review the contents of the proposed implementation plan to assure the following items are addressed:
1) That all items identified by the gap analysis as needing to be developed are included in the implementation plan.
2) That all items in the implementation plan are addressed in an appropriate implementation phase, as described in Chapter 3 of this Volume.
NOTE: Aviation organizations may choose to accelerate the implementation of certain SMS requirements to an earlier phase than prescribed by the phased implementation requirements, but they may not delay implementation to a later phase.
3) That the implementation plan is designed with time goals that meet the schedule requirements in the GACAR for phased implementation.
B. The proposed implementation plan must show that it has been reviewed and endorsed by top management, including the accountable executive of the aviation organization.
C. If the implementation plan is found to be acceptable, the Inspector will notify the aviation organization in writing of its acceptance. If the implementation plan is found to be unacceptable, the Inspector will notify the aviation organization in writing of its rejection. Templates for those notification letters are included in Section 5 of this chapter. In the case of rejection of the plan, the Inspector, in consultation with their supervisor, must begin appropriate compliance enforcement actions in accordance with the procedures found in Volume 13 of this handbook.
2.5.3.19. MOVING TO THE NEXT LEVEL OF PHASED IMPLEMENTATION.
A. As described in detail in Chapter 3 of this Volume, each level of the phased implementation process contains specific items the aviation organization must accomplish prior to receiving approval to move to the next level of phased implementation.
B. See Figures 2.5.3.1 through 2.5.3.3 for worksheets Inspectors can use to document that the requirements for moving to the next implementation level have been met.
C. Agreement on the Aviation Organization’s Safety Performance Indicators and Targets. In accordance with the discussion in Section 4 of this Chapter, the Inspector must ensure that agreement is reached and documented as to the safety performance indicators, values and targets (goals) as part of the acceptance process in Phases 3 and 4 of the phased implementation.
D. Acceptance or Rejection of Moving to the Next Level. If the assessment for moving to the next level (phase) is found to be acceptable, the Inspector will notify the aviation organization in writing of its acceptance. If the assessment is found to be unacceptable, the Inspector will notify the aviation organization in writing that the proposed move to the next level is rejected. Templates for those notification letters are included in Section 5 of this Chapter. In the case of rejection of the move to the next level in accordance with the requirements of the GACAR, the Inspector will initiate compliance enforcement activities in accordance with the procedures in Volume 13 of this handbook.
2.5.3.21. FINAL SMS ACCEPTANCE.
A. At the successful end of the phased implementation process, the Inspector must formally accept the SMS.
B. If the SMS is found to be complete and acceptable, the Inspector will notify the aviation organization in writing of its acceptance. A template for the notification letter is included in Section 5 of this Chapter. In the notification of acceptance to the aviation organization, it must be made clear that it is a requirement for the aviation organization to maintain the SMS so that it continues to comply with the requirements of GACAR Part 5.
Figure 2.5.3.1. Level 1 GACA Inspector Work Sheet for Aviation Organization SMS Documentation

DOCUMENTATION

Level One expectation results (record the date reviewed and Pass/Fail for each item):
1. Objective evidence of top management's commitment to implement SMS, define safety policy and convey safety expectations and objectives to its employees
2. Objective evidence of top management's commitment to ensure adequate resources are available to implement SMS
3. Designation of an accountable executive who will be responsible for SMS development
4. Definition of safety-related positions for those who will participate in SMS development and implementation
5. Completed gap analysis on the entire organization for all elements of the SMS Framework
6. Completed comprehensive SMS Implementation Plan for all elements to take the organization through Level 4
7. Identified safety competencies required, completed training appropriate to the Level 1 implementation phase, and a training plan for all employees
8. Management Commitment document ◻ Yes ◻ No
9. Safety Policy ◻ Yes ◻ No
10. Summary of SMS Implementation Plan ◻ Yes ◻ No
11. SMS Training Plan for all employees ◻ Yes ◻ No

GACA Inspector Signature ____________________________________ Date _______________

Figure 2.5.3.2. Level 2 GACA Inspector Work Sheet for Aviation Organization SMS Documentation
DOCUMENTATION
Level Two expectation results (record the date reviewed and Pass/Fail for each item):
1. Processes and procedures documented for operating the SMS to the expectations of Level Two, including the analysis, assessment, and application of mitigating/corrective actions to known deficiencies in safety management practices and operational processes
2. Developed documentation relevant to the SMS implementation plan and Safety Risk Management (SRM) components (reactive processes)
3. Documented and initiated non-punitive employee reporting and feedback program
4. Completed SMS training for the staff directly involved in the SMS process and initiated training for all employees to at least the level necessary for the SMS reactive processes
5. Applied Safety Risk Management (SRM) processes and procedures to at least one known (existing) hazard and initiated the mitigation process to control/mitigate the risk associated with the hazard
6. Updated comprehensive SMS implementation plan for all elements to take the aviation organization through Level 4
7. Objective evidence that SRM processes and procedures have been applied to at least one existing hazard and that the mitigation process has been initiated ◻ Yes ◻ No
8. Updated comprehensive SMS implementation plan (or summary) for all elements to take the aviation organization through Level 4 ◻ Yes ◻ No
9. Updated SMS Training Plan for all employees ◻ Yes ◻ No

GACA Inspector signature ____________________________ Date: _______________________

Figure 2.5.3.3. Level 3 GACA Inspector Work Sheet for Aviation Organization SMS Documentation
DOCUMENTATION
Level Three expectation results (record the date reviewed and Pass/Fail for each item):
1. Demonstrated performance of Level 2 expectations
2. Objective evidence that all SMS processes are being updated, maintained and practiced
3. Objective evidence that the Safety Risk Management (SRM) process has been conducted on all Component 2.0 operating processes
4. Objective evidence of compliance with Process 2.1.1
5. Objective evidence of compliance with Element 3.2
6. Objective evidence of compliance with Element 4.1
7. Objective evidence of compliance with Process 4.1.1
8. All applicable SMS processes and procedures must have been applied to at least one existing hazard and the mitigation process must have been initiated
9. Complete SMS training for the staff directly involved in the SMS process to the level of accomplishing all SMS processes
10. Complete employee training commensurate with the requirements of Level 3
11. Objective evidence that SRM processes and procedures have been applied to all Component 2.0 operating processes ◻ Yes ◻ No
12. Objective evidence that SRM processes and procedures have been applied to at least one existing hazard and that the mitigation process has been initiated ◻ Yes ◻ No
13. Updated comprehensive SMS implementation plan for all elements ◻ Yes ◻ No
14. Updated SMS Training Plan for all employees ◻ Yes ◻ No
GACA Inspector signature____________________________ Date:_______________________
TARGETS.
A. Fundamental to safety management is the concept of continuous improvement, and achieving it requires ongoing knowledge of the level of safety within the aviation organization’s system. Determining the level of safety achieved, and comparing it to the minimum acceptable level of safety established by the aviation organization’s safety policy and objectives, depends upon identifying, measuring and tracking safety indicators relative to established safety targets.
B. As part of the Aviation Safety Inspectors’ (Inspectors) oversight responsibilities, it is essential to review, validate, and evaluate the aviation organization’s Safety Performance Indicators (SPIs), Safety Performance Targets (SPTs), the associated target values set as performance goals, and the organization’s plan for monitoring indicator trends and implementing actions to achieve these targets. The review process also serves as a basis for determining the acceptability of the proposed SPIs and SPTs, in alignment with the evaluation criteria referenced in Appendix B to Chapter 5.
C. In order to support the evaluation of a specific aviation organization’s Safety Performance Indicators (SPIs), associated values, and Safety Performance Targets (SPTs), the organization’s distinct operational environment must be carefully considered. This contextual understanding ensures that the proposed indicators and targets are appropriately tailored to the organization’s operational scope and are acceptable to GACA in accordance with the process described in Appendix B to Chapter 5.
D. The certificate holder must continuously monitor its safety performance against the evaluated Safety Performance Indicators (SPIs), values, and targets, in accordance with GACAR Part 5, Subpart D (Safety Assurance), § 5.73(c), Safety Performance Assessment. While the organization may introduce additional SPIs or SPTs as appropriate, any modification, adjustment, or removal of previously evaluated indicators or targets requires re-evaluation by the Inspector. All evaluated or newly introduced SPIs and SPTs shall be subject to periodic review as part of the Inspector’s ongoing oversight and surveillance activities (refer to Volume 12, Chapter 19 for further guidance).
E. Each certificate holder that is required to have a Safety Management System (SMS) under the applicability of GACAR Part 5 must submit its Safety Performance Indicators (SPIs) and Safety Performance Targets (SPTs), including relevant traffic movement data (reported monthly, quarterly, and/or annually), as part of its periodic review by GACA.
Part 151 certificate holders are required to submit their SPIs and SPTs, including working hours data, for periodic review by the Inspector. Part 145 certificate holders authorized and located within KSA are required to submit their SPIs and SPTs, including maintenance hours per aircraft type, for periodic review by the Inspector. All air operators must submit their SPIs, including total fleet size, aircraft types, and type of operations. These submissions will support the Inspector’s periodic review and ongoing oversight activities.
Section 5. Communication with Aviation Organizations
2.5.5.1. LETTERS OF ACCEPTANCE OR REJECTION. For all General Authority of Civil Aviation (GACA) oversight activities involving an application or proposal by an aviation organization, the GACA’s acceptance or rejection will be communicated in writing. Sample response letters are located in Figures 2.5.5.1 through 2.5.5.6 and cover the following subjects:
• Implementation plan acceptance/rejection
• Level achievement acknowledgement for the phased implementation
• Final overall SMS acceptance
• Acknowledgement of continued compliance and agreement on safety indicators and targets

Figure 2.5.5.1. Sample Letter of SMS Implementation Plan Rejection

[Aviation Organization]
[Address]

Dear [Name]:
This is to inform you that the GACA Inspector team has reviewed and found your Implementation Plan for the Safety Management System (SMS) phased implementation as provided for by the GACAR to be unacceptable in its present state. The reasons for the rejection of your Implementation Plan are as follows:
Please respond no later than [Date] with your plan for correcting the identified deficiencies so as to produce an acceptable implementation plan. Continued submission of an unacceptable implementation plan will be considered a violation of the phased implementation regulations contained in GACAR Part 5 and will result in appropriate compliance enforcement measures.
Sincerely,
GACA Inspector Signature

Figure 2.5.5.2. Sample Letter of SMS Implementation Plan Acceptance

[Aviation Organization]
[Address]

Dear [Name]:

This is to inform you that the GACA Inspector team has reviewed and accepted your Implementation Plan for the Safety Management System (SMS) phased implementation as provided for by GACAR Part 199.
You are now considered to be in Implementation Level 1, Planning and Organization, and you must proceed with the development activities required to proceed to the next implementation levels.

Sincerely,
GACA Inspector Signature

Figure 2.5.5.3. SMS Final Acceptance Letter

[Date]
[Accountable Executive]
[Aviation Organization]
[Address]

Dear [Name]:
This is to notify you that the GACA staff have fully assessed and accepted the Safety Management System (SMS) of [SP Name]. This is the final step in the acceptance process; your SMS is now considered fully operational.
As an aviation organization certificated/authorized under the GACAR, you are required to continuously monitor and maintain the operation of your SMS, and any deviations from this requirement will be cause for the GACA to reassess the ongoing acceptability of your SMS. Any proposed changes to the accepted SMS must be submitted to and accepted by the GACA prior to their implementation.
The GACA will be conducting periodic surveillance of the SMS operation.

Sincerely,
GACA [Representative]

Figure 2.5.5.4. Sample Letter of SMS Level Acceptance

[Aviation Organization]
[Address]

Dear [Name]:
This letter is to inform you that we have reviewed and accepted your Implementation Level (_) documentation, as required by your participation in the SMS phased implementation process provided for by GACAR Part 199. You are now authorized to continue with the development of Level (_) in accordance with your accepted implementation plan.
Sincerely,
GACA Inspector Signature

Figure 2.5.5.5. Sample Letter for Unacceptable SPIs/SPTs

[Aviation Organization]
[Address]

Dear [Name]:

This is to inform you that GACA has reviewed and evaluated the Safety Performance Indicators and Targets (SPIs and SPTs) of [SP Name] and found them to be unacceptable in their present state. The reason(s) for considering the submitted SPIs to be unacceptable is (are) as follows:
(Inspectors: describe the reasons for not finding the SPIs acceptable and what is expected from the aviation organization)

Please respond no later than [Date] with your updated SPIs and SPTs.
Sincerely,
GACA Inspector Signature

Figure 2.5.5.6. Sample Letter for Acceptable SPIs/SPTs

[Aviation Organization]
[Address]

Dear [Name]:

This is to inform you that GACA has reviewed and evaluated the Safety Performance Indicators and Targets (SPIs and SPTs) of [SP Name] and found them to be acceptable. Please note that acceptance under the GACA SPI and SPT evaluation process means that the evaluated indicators and targets meet the following criteria:
1. The SPI and SPT report includes the mandatory SPIs outlined in Volume 2, Chapter 5, Appendix B of the eBook.
2. SPIs are aligned with the organization’s safety objectives.
3. SPIs are relevant to the organization’s aviation operations.
4. The size and complexity of the organization’s operations have been considered.
5. A mix of leading and lagging indicators has been selected.
6. SPIs include quantitative monitoring of both high-consequence safety outcomes and lower-consequence events.
7. Safety alert levels and targets are based on statistical methods.
8. The developed targets are SMART (specific, measurable, achievable, relevant, time-bound).
9. SPIs include alert/target settings to define unacceptable performance areas and planned improvement goals.
As an aviation organization certificated/authorized under GACAR Part [xxx], you are required to continuously monitor and evaluate the SPIs and SPTs. Any changes to the evaluated SPIs and/or SPTs must be submitted to GACA for a new evaluation.
Sincerely, GACA [Representative]
COMPONENT 1.0 - SAFETY POLICY AND OBJECTIVES
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• All levels of management clearly articulate the importance of safety when addressing company personnel
• Management has a clear commitment to safety and demonstrates it through active and visible participation in the safety management system
• Management makes the policy clearly visible to all personnel, particularly throughout the safety-critical areas of the organization
• All personnel understand their authorities, responsibilities and accountabilities with regard to all safety management processes, decisions and actions
• Safety objectives have been established utilizing a safety risk profile that considers hazards and risks
• Objectives and goals are consistent with the safety policy and their attainment is measurable
• Safety objectives and goals are reviewed and updated periodically
• There is a documented process to develop a set of safety goals to achieve overall safety objectives
• Safety objectives and goals are documented and publicized
• There is controlled documentation that describes the SMS and the interrelationship between all of its elements
• Documentation is readily accessible to all personnel
• There is a process to periodically review SMS documentation to ensure its continuing suitability, adequacy and effectiveness, and that changes to company documentation have been implemented
ELEMENT 1.1 - SAFETY POLICY
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• A safety policy is in existence, followed and understood
• The organization has based its safety management system on the safety policy and there is a clear commitment to safety
• The safety policy is agreed to and approved by the accountable executive
• The safety policy is promoted by the accountable executive
• The safety policy is reviewed periodically for continuing applicability
• The safety policy is communicated to all employees with the result that they are made aware of their safety obligations
• The policy is implemented at all levels of the organization
• Safety objectives have been established utilizing a safety risk profile that considers hazards and risks
• Objectives and goals are consistent with the safety policy and their attainment is measurable
• Safety objectives and goals are reviewed and updated periodically
• There is a documented process to develop a set of safety goals to achieve overall safety objectives
• Safety objectives and goals are documented and publicized
• The organization has a process or system that provides for the capture of internal information including hazards, incidents and accidents and other data relevant to the SMS
• The reactive reporting system is simple, accessible and commensurate with the size and complexity of the organization
• Reactive reports are reviewed at the appropriate level of management
• There is a feedback process to notify contributors that their reports have been received and to share the end results of the analysis
• The organization has a process in place to ensure confidentiality when requested
• The feedback process provides an opportunity for report submitters to indicate whether they are satisfied with the response
NOTE: The items above shown in italics refer to the Safety Performance Indicators and Targets that require periodic review and agreement by GACA Inspectors as described in Section 4 of this Chapter. Further guidance on the periodic review of Safety Performance Indicators and Targets can also be found in Volume 12, Chapter 19.
ELEMENT 1.2 - MANAGEMENT COMMITMENT AND SAFETY ACCOUNTABILITIES
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• There are documented roles, responsibilities and accountabilities for the accountable executive and evidence that the SMS is established, maintained and adhered to
• Those management officials who can make safety risk management decisions are clearly identified, by position
• The accountable executive demonstrates control of the financial and human resources required for the proper execution of the SMS responsibilities
• Safety authorities, responsibilities and accountabilities are transmitted to all personnel
ELEMENT 1.3 - KEY SAFETY PERSONNEL
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• There are documented roles, responsibilities and accountabilities for the accountable executive to ensure the SMS is operating and maintained, and to keep top management informed of its continuing performance
• A qualified person has been appointed, in accordance with the regulation, and has demonstrated control of the SMS
• All personnel understand their authorities, responsibilities and accountabilities with regard to all safety management processes, decisions and actions
ELEMENT 1.4 - EMERGENCY PREPAREDNESS AND RESPONSE
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• There is clear identification of who is responsible for the quality of the Emergency Preparedness and Response Process and associated documentation, as well as the procedures and responsibilities for accomplishing the process
• There are clearly established emergency response procedures across all operational departments
• There is clearly established planning and execution of periodic exercises of the organization’s emergency response procedures
• There is emergency preparedness and response training for affected personnel
ELEMENT 1.5 - SMS DOCUMENTATION AND RECORDS
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• There is controlled documentation that describes the SMS and the interrelationship between all of its elements
• Documentation is readily accessible to all personnel
• There is a process to periodically review SMS documentation to ensure its continuing suitability, adequacy and effectiveness, and that changes to company documentation have been implemented
• The organization has a process to identify changes within the organization that could affect company documentation
COMPONENT 2.0 - SAFETY RISK MANAGEMENT
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• Inputs (interfaces) for this Component are obtained from the critical expectations of its systems and operational environment
• There is clear identification of who is responsible for all aspects of the Safety Risk Management process
• The SMS includes, at a minimum, the following processes: System description and task analysis; Hazard identification; Safety risk analysis; Safety risk assessment; and Safety risk control and mitigation
• The SMS processes apply to initial designs of systems, organizations and products, and to planned changes to operational processes
• There are feedback loops between assurance functions described in the Continuous Monitoring Process to evaluate the effectiveness of safety risk controls
• There are defined acceptable and unacceptable levels of safety risk
• There is defined acceptable risk for hazards that will exist in the short term while safety risk control/mitigation plans are developed and implemented
ELEMENT 2.1 - HAZARD IDENTIFICATION AND ANALYSIS
Process 2.1.1 - System Description and Task Analysis
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• Inputs (interfaces) for the System Description and Task Analysis process are obtained from the Safety Risk Management Component 2.0
• There are system descriptions and task analyses to the level of detail necessary to: Identify hazards; Develop operational procedures; and Develop and implement risk controls
ELEMENT 2.1 - HAZARD IDENTIFICATION AND ANALYSIS
Process 2.1.2 - Identify Hazards
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• Inputs (interfaces) for the Hazard Identification Process are obtained from the System Description and Task Analysis Process 2.1.1, to include a new hazard identified from the Safety Assurance Component 3.0, failures of risk controls due to design deficiencies found in the System Assessment Process 3.1.8, and/or from any other source
• There is clear identification of who is responsible for all aspects of the Hazard Identification process
• Hazards are identified for the entire scope of each system, as defined in the system description
• Identified hazards are tracked for the entire scope of each system, as defined in the system description
ELEMENT 2.2 - RISK ASSESSMENT AND CONTROL
Process 2.2.1 - Analyze Safety Risk
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• Inputs (interfaces) for this process are obtained from the Hazard Identification Process 2.1.2
• There is clear identification of who is responsible for all aspects of the Safety Risk Analysis process
• Safety risk analysis functions include: Analysis of existing safety risk controls; Triggering mechanisms; and Safety risk of a reasonably likely outcome from the existence of a hazard
• Reasonably likely outcomes from the existence of a hazard include estimations of the severity and likelihood
ELEMENT 2.2 - RISK ASSESSMENT AND CONTROL
Process 2.2.2 - Assess Safety Risk
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• Inputs (interfaces) for this process are obtained from the Safety Risk Analysis Process 2.2.1 in terms of estimated severity and likelihood
• There is clear identification of who is responsible for all aspects of the Safety Risk Assessment process
• Each hazard is analyzed for its safety risk acceptability using the safety risk acceptance process as described in Safety Risk Management Component 2.0
ELEMENT 2.2 - RISK ASSESSMENT AND CONTROL
Process 2.2.3 - Control/Mitigate Safety Risk
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• Inputs (interfaces) for the Control/Mitigate Safety Risk process are obtained from the Safety Risk Assessment Process 2.2.2
• Residual risk is evaluated when creating safety risk controls and mitigations
• Interfaces between the risk control/mitigation functions (this process) and the Safety Assurance Component 3.0 are identified and documented
• Performance objectives and design expectations of the Control/Mitigate Safety Risk Process are being reviewed periodically for successful accomplishment
COMPONENT 3.0 - SAFETY ASSURANCE
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• Inputs (interfaces) for this component are obtained from the Safety Risk Management Component 2.0
• There is clear identification of who is responsible for all aspects of the Safety Assurance component
• There is monitoring of systems and operations to: Identify new hazards; Measure the effectiveness of safety risk controls; and Ensure compliance with regulatory requirements applicable to the SMS
• There is collection of the data necessary to demonstrate the effectiveness of: Operational processes; and the SMS
• Performance objectives and design expectations of the Safety Assurance Component are being reviewed periodically for successful accomplishment
ELEMENT 3.1 - SAFETY PERFORMANCE MONITORING AND MEASUREMENT
Process 3.1.1 - Continuous Monitoring
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• Inputs (interfaces) for this component are obtained from the Risk Assessment Process 2.2.2, Risk Control/Mitigation Process 2.2.3, System Assessment Process 3.1.8 or Preventive/Corrective Action Process 3.3.1
• There is clear identification of who is responsible for all aspects of the Continuous Monitoring Process
• There is monitoring of operational data (e.g., duty logs, crew reports, work cards, process sheets, and reports from the employee safety feedback system specified in Process 3.1.6) to: Determine whether it conforms to safety risk controls; Measure the effectiveness of safety risk controls; Assess SMS system performance; and Identify hazards
• There is monitoring of products and services from contractors
• Performance objectives and design expectations of the Continuous Monitoring Process are being reviewed periodically for successful accomplishment
ELEMENT 3.1 - SAFETY PERFORMANCE MONITORING AND MEASUREMENT
Process 3.1.2 - Internal Audits by Operational Departments
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• Inputs (interfaces) for this component are obtained from the Safety Risk Management Component 2.0
• There is clear identification of who is responsible for all aspects of the Safety Assurance component
• There is monitoring of systems and operations to: Identify new hazards; Measure the effectiveness of safety risk controls; and Ensure compliance with regulatory requirements applicable to the SMS
• There is collection of the data necessary to demonstrate the effectiveness of: Operational processes; and the SMS
• Performance objectives and design expectations of the Safety Assurance Component are being reviewed periodically for successful accomplishment
ELEMENT 3.1 - SAFETY PERFORMANCE MONITORING AND MEASUREMENT
Process 3.1.3 - Internal Evaluation
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• Inputs (interfaces) for this process are obtained from the Risk Assessment Process 2.2.2 or Control/Mitigate Safety Risk Process 2.2.3
• There is clear identification of who is responsible for all aspects of the Internal Evaluation Process
• The planning of the internal evaluation program takes into account: Safety criticality of the processes being evaluated; and Results of previous evaluations
• The definition for evaluations includes information about evaluation: Criteria; Scope; Frequency; Methods; and Processes used to select the evaluators
• The documented procedures include evaluation responsibilities and requirements for the following: Planning evaluations; Conducting evaluations; Reporting results; Maintaining records; and Evaluating contractors and vendors
• The person or organization performing evaluations of operational processes is independent of the process being evaluated
• There is periodic review of supervisory and operational controls to ensure the effectiveness of the Internal Evaluation Process
ELEMENT 3.1 - SAFETY PERFORMANCE MONITORING AND MEASUREMENT
Process 3.1.4 - External Auditing of the SMS
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• Inputs (interfaces) for this process are obtained from the Control/Mitigate Safety Risk Process 2.2.3 and from the GACA and/or other external agencies
• There is clear identification of who is responsible for all aspects of the External Auditing Process
• The results of oversight organization audits, and other external audit results, are included in the analyses conducted under SMS Framework Analysis of Data Process 3.1.7
• There is periodic review of supervisory and operational controls to ensure the effectiveness of the External Auditing Process
ELEMENT 3.1 - SAFETY PERFORMANCE MONITORING AND MEASUREMENT
Process 3.1.5 - Investigation
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• Inputs (interfaces) for this process are obtained from the Control/Mitigate Safety Risk Process 2.2.3 and as needed upon occurrence of events
• There is clear identification of who is responsible for all aspects of the Investigation Process
• There are requirements to collect data on the following: Incidents; Accidents; and Potential regulatory non-compliance
• There are procedures established to investigate the following: Incidents; Accidents; and Instances of regulatory non-compliance
• There is periodic review of supervisory and operational controls to ensure the effectiveness of the Investigation Process
ELEMENT 3.1 - SAFETY PERFORMANCE MONITORING AND MEASUREMENT
Process 3.1.6 - Employee Reporting and Feedback System
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• Inputs (interfaces) for the Employee Reporting and Feedback System are obtained from employees
• There is clear identification of who is responsible for all aspects of the Employee Reporting and Feedback Process
• There is a confidential Employee Reporting and Feedback System, as in the Safety Promotion component, established and maintained
• Employees are encouraged to use the Safety Reporting and Feedback System without fear of reprisal and are encouraged to submit solutions/safety improvements where possible
• Data from the Safety Reporting and Feedback System is monitored to identify emerging hazards
• Data collected in the Employee Reporting and Feedback System is included in the analyses conducted under the SMS Framework Analysis of Data Process
• Performance objectives and design expectations of the Employee Reporting and Feedback Process are being reviewed periodically for successful accomplishment
ELEMENT 3.1 - SAFETY PERFORMANCE MONITORING AND MEASUREMENT
Process 3.1.7 - Analysis of Data
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• Inputs (interfaces) for this process are obtained from the data acquisition processes 3.1.1 through 3.1.6
• There is clear identification of who is responsible for all aspects of the Analysis of Data Process
• There are procedures in place to analyze the data collected to demonstrate the effectiveness of the following: Risk controls in the organization’s operational processes (SMS Framework Safety Policy Component); and the aviation organization's SMS
• There are procedures in place to analyze the data collected to identify root causes of deficiencies and potential new hazards and evaluate where improvements can be made in the following: Operational processes (SMS Framework Safety Policy Component); and the aviation organization's SMS
• Performance objectives and design expectations of the Analysis of Data Process are being reviewed periodically for successful accomplishment
ELEMENT 3.1 - SAFETY PERFORMANCE MONITORING AND MEASUREMENT
Process 3.1.8 - System Assessment
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• Inputs (interfaces) for this process are obtained from the Analysis of Data Process 3.1.7
• There is clear identification of who is responsible for all aspects of the System Assessment Process
• There are procedures in place, and conducted, to assess the performance and effectiveness of the following: Safety-related functions of operational processes (Safety Policy Component) against their requirements; and the SMS against its objectives and expectations
• There are procedures in place, and conducted, to record system assessments that result in a finding of the following: Conformity or nonconformity with existing safety risk controls and/or SMS expectations, including regulatory requirements; and New hazards found
• There are procedures in place, and conducted, to use Safety Risk Management (Component 2.0) if risk assessment and risk control performance indicates the following: That new hazards or potential hazards have been found; and/or That the system needs to be changed
• There is periodic review of supervisory and operational controls to ensure the effectiveness of the System Assessment Process
ELEMENT 3.2 - MANAGEMENT OF CHANGE
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• Inputs (interfaces) for this process are obtained from the proposed introduction of new technology or equipment, changes in the operating environment, changes in key personnel, significant changes in staffing levels, changes in safety regulatory requirements, significant restructuring of the organization, physical changes, changes to existing system designs, and new operations or procedures or modifications to existing operations or procedures
• There is clear identification of who is responsible for all aspects of the Management of Change Process
• There are requirements and procedures in place to not implement any of the following until the level of safety risk of each identified hazard is determined to be acceptable: introduction of new technology or equipment, changes in the operating environment, changes in key personnel, significant changes in staffing levels, changes in safety regulatory requirements, significant restructuring of the organization, physical changes, changes to existing system designs, and new operations or procedures or modifications to existing operations or procedures
• Performance objectives and design expectations of the Management of Change Process are being reviewed periodically for successful accomplishment
• There are requirements in place to not implement any change until the following items are produced:
o Preliminary Safety Analysis (PSA) and TEPIOILQC are considered
o Preliminary Hazards Lists (PHLs)
o Operation Readiness Case (ORC)
• There is periodic review of supervisory and operational controls to ensure the effectiveness of the Management of Change Process
ELEMENT 3.3 - CONTINUOUS IMPROVEMENT
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• Inputs (interfaces) for this process are obtained through continuous application of Safety Risk Management (Component 2.0), Safety Assurance (Component 3.0) and the outputs of the SMS, including safety lessons learned
• There is clear identification of who is responsible for all aspects of the Continuous Improvement Process
• There are requirements and procedures in place to continuously improve the effectiveness of the SMS and of safety risk controls through the use of the safety and quality policies, objectives, audit and evaluation results, analysis of data, corrective and preventive actions, and management reviews
• Performance objectives and design expectations of the Continuous Improvement Process are being reviewed periodically for successful accomplishment
• There is periodic review of supervisory and operational controls to ensure the effectiveness of the Continuous Improvement Process
ELEMENT 3.3 - CONTINUOUS IMPROVEMENT
Process 3.3.1 - Preventive/Corrective Action
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• Inputs (interfaces) for this process are obtained from System Assessments (Process 3.1.8) with findings of non-performing risk controls
• There is clear identification of who is responsible for all aspects of the Preventive/Corrective Action Process
• There is a requirement and documented action to develop the following: Preventive actions for identified potential nonconformities with risk controls; and Corrective actions for identified nonconformities with risk controls
• There is a requirement and documented action to consider safety lessons learned in the development of both preventive actions and corrective actions
• There is a requirement and documented action to take necessary preventive and corrective action based on the findings of investigations
• There is a requirement and documented action to prioritize and implement preventive and corrective actions in a timely manner
• There is periodic review of supervisory and operational controls to ensure the effectiveness of the Preventive/Corrective Action Process
ELEMENT 3.3 - CONTINUOUS IMPROVEMENT
Process 3.3.2 - Management Review
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• Inputs (interfaces) for this process are obtained from the outputs of Safety Risk Management (Component 2.0) and Safety Assurance (Component 3.0) activities
• There is clear identification of who is responsible for all aspects of the Management Review Process
• Top management conducts regular reviews of the SMS, including the outputs of the Safety Risk Management Processes, the outputs of the Safety Assurance Processes, and safety lessons learned
• Top management includes in its reviews of the SMS an assessment of the need for improvements to the organization’s operational processes and the SMS
• There is a requirement and action to keep records of the disposition and status of management reviews according to the organization’s record retention policy
• There is periodic review of supervisory and operational controls to ensure the effectiveness of the Management Review Process
COMPONENT 4.0 - SAFETY PROMOTION
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• Inputs (interfaces) are identified between top management and organizational personnel
• There is clear identification of who is responsible for all aspects of the Safety Promotion Component
• Top management promotes the growth of a positive safety culture through the following: Publication of top management’s stated commitment to safety to all employees; Visible demonstration of their commitment to the SMS; Communication of the safety responsibilities for the organization’s personnel; Clear and regular communication of safety policy, goals, expectations, standards, and performance to all employees of the organization; An effective employee reporting and feedback system that provides confidentiality; Use of a safety information system that provides an accessible, efficient means to retrieve information; and Allocation of resources essential to implement and maintain the SMS
• Performance objectives and design expectations of the Safety Promotion Component are being reviewed periodically for successful accomplishment
• There is periodic review of supervisory and operational controls to ensure the effectiveness of the Safety Promotion Component
ELEMENT 4.1 - COMPETENCIES AND TRAINING
Process 4.1.1 - Personnel Expectations (Competence)
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• Inputs (interfaces) for this process are identified between top management and the key safety personnel referenced in Management Commitment and Safety Accountabilities Element 1.2 and Key Safety Personnel Element 1.3
• There is clear identification of who is responsible for all aspects of the Personnel Expectations Process
• There is a requirement and action to identify the competency requirements for safety-related positions identified in Management Commitment and Safety Accountabilities Element 1.2 and Key Safety Personnel Element 1.3
• There is a requirement and action to ensure that the personnel in the safety-related positions identified in Management Commitment and Safety Accountabilities Element 1.2 and Key Safety Personnel Element 1.3 meet the documented competency requirements of Personnel Expectations Process 4.1.1
• Performance objectives and design expectations of the Personnel Expectations Process are being reviewed periodically for successful accomplishment
• There is periodic review of supervisory and operational controls to ensure the effectiveness of the Personnel Expectations Process
ELEMENT 4.1 - COMPETENCIES AND TRAINING
Process 4.1.2 - Training
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• Inputs (interfaces) for the Training Process are obtained through the outputs of the SMS and the documented competency expectations of the Personnel Expectations Process
• There is clear identification of who is responsible for all aspects of the SMS Training Process
• There is implemented training to meet the competency expectations of Personnel Expectations Process 4.1.1 for the personnel in the safety-related positions identified in Management Commitment and Safety Accountability Element 1.2 and Key Safety Personnel Element 1.3
• There is a requirement and action to consider the scope, content, and frequency of training required to meet and maintain competency for those individuals in the positions identified in Management Commitment and Safety Accountability Element 1.2 and Key Safety Personnel Element 1.3
• Employees receive training commensurate with their: Position level within the organization; and Impact on the safety of the organization’s products or services
• There is a requirement and action to maintain training currency by periodically reviewing and updating the training
• There is a requirement and action to maintain records of required and delivered training
• Safety-related training media is periodically reviewed and updated for target populations
• There is periodic review of supervisory and operational controls to ensure the effectiveness of the SMS Training Process
ELEMENT 4.2 - COMMUNICATION AND AWARENESS
Acceptance Criteria: Evidence of acceptable component and element content and/or activity includes the following:
• Inputs (interfaces) for this process are obtained from the outputs of Safety Risk Management Component 2.0 and Safety Assurance Component 3.0
• There is clear identification of who is responsible for all aspects of the Communication and Awareness Process
• There is a requirement and action to communicate the outputs of the SMS, the rationale behind controls, and preventive and corrective actions, and to ensure awareness of SMS objectives among employees
• There is a requirement and action in place to provide the GACA access to the outputs of the SMS in accordance with established agreements and disclosure programs
• There is interface with other organizations’ SMSs to cooperatively manage issues of mutual concern
• Performance objectives and design expectations of the Communication and Awareness Process are being reviewed periodically for successful accomplishment
• There is periodic review of supervisory and operational controls to ensure the effectiveness of the Communication and Awareness Process
SAFETY PERFORMANCE INDICATORS (SPIS) AND SAFETY PERFORMANCE TARGETS (SPTS)
Introduction
Safety Performance Indicators (SPIs) are integral to the monitoring and assessment of safety risks within an organization’s operational framework. They serve as essential tools for tracking both established safety risks and identifying emerging hazards, thereby enabling the timely detection of safety deficiencies.
SPIs are also pivotal in determining the need for corrective actions, ensuring that the organization maintains compliance with safety regulations and continuously improves its safety management system.
These indicators also furnish objective data to enable the GACA to assess the safety performance of an Aviation Organization, as well as evaluate progress toward achieving established safety objectives. In the development of Safety Performance Indicators (SPIs), service providers must account for factors such as the organization’s safety risk tolerance, the cost-benefit analysis of system improvements, and compliance with applicable GACA regulatory requirements. Collaboration with Aviation Safety Inspectors (Inspectors) is imperative throughout the selection and development process of SPIs, ensuring that the chosen indicators are consistent, aligned with regulatory expectations, and support the KSA State Safety Program (SSP) and the Aviation Organization's overall safety objectives.
The safety performance of a Safety Management System (SMS) is reflected through Safety Performance Indicators (SPIs) and their associated alert and target levels. Monitoring the current performance of each SPI, in conjunction with historical trends, supports the identification of significant deviations in safety outcomes. Alert thresholds and target levels are to be established based on recent historical performance, reflecting realistic and achievable expectations appropriate to the operator or service provider.
Setting alert levels for SPIs is important for risk monitoring, as they distinguish acceptable performance from unacceptable outcomes. A commonly used method to define such thresholds is the standard deviation principle, which uses historical data averages and variability to establish alert criteria for future monitoring periods.
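As an illustrative sketch only (not a GACA-prescribed method), the standard deviation principle described above can be applied as follows; the monthly rates and band labels are hypothetical:

```python
# Setting SPI alert levels from historical data using the standard
# deviation principle: the preceding period's average plus 1, 2 and 3
# standard deviations define escalating alert thresholds.
from statistics import mean, stdev

# Monthly occurrence rates from the preceding monitoring period (hypothetical)
previous_period_rates = [2.1, 2.8, 2.4, 3.0, 2.6, 2.2, 2.9, 2.5, 2.7, 2.3, 2.6, 2.4]

avg = mean(previous_period_rates)
sd = stdev(previous_period_rates)

def check_rate(rate: float) -> str:
    """Classify a new monthly rate against the alert thresholds."""
    if rate >= avg + 3 * sd:
        return "urgent"      # well outside historical variability
    if rate >= avg + 2 * sd:
        return "alert"
    if rate >= avg + 1 * sd:
        return "caution"
    return "acceptable"      # within expected historical variability
```

A new month's rate can then be classified immediately, e.g. `check_rate(3.5)` falls in the highest band for this data, while `check_rate(2.5)` is within normal variability.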
SPIs should encompass both leading and lagging indicators, reflecting a balanced approach that captures safety performance at both proactive and reactive stages. Additionally, they should include high-consequence events, such as accidents and serious incidents, as well as low-consequence events, such as incidents and non-conformances, ensuring a comprehensive view of the organization’s safety performance.
Once Safety Performance Indicators (SPIs), targets and alert levels are established, their performance is to be reviewed and updated on a regular basis. Progress against each SPI should be tracked individually, with an overall performance summary compiled across the full set of indicators. Performance status may be assessed using qualitative methods (e.g., labeling as satisfactory or unsatisfactory) or through quantitative scoring based on the achievement of targets and the avoidance of alert level breaches.
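A quantitative scoring approach of the kind mentioned above might be sketched as follows; the SPI names, point values, and results are hypothetical, not a prescribed scheme:

```python
# Compile an overall performance summary across a set of SPIs:
# points for achieving the target and for avoiding an alert level breach.
spi_results = [
    # (SPI name, target met?, alert level breached?) - hypothetical data
    ("Runway incursion rate", True, False),
    ("Ground handling incident rate", False, False),
    ("Overdue safety report rate", True, True),
]

def score(target_met: bool, alert_breached: bool) -> int:
    """2 points for achieving the target, 1 point for avoiding an alert breach."""
    return (2 if target_met else 0) + (0 if alert_breached else 1)

total = sum(score(met, breached) for _, met, breached in spi_results)
maximum = 3 * len(spi_results)
overall_pct = 100 * total / maximum

# Each SPI can also be labelled qualitatively, e.g. "satisfactory"
# when it earns at least 2 of its 3 available points.
```

For the sample data this yields 6 of 9 available points, an overall score of about 67%.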
Selecting and defining SPIs
SPIs are the parameters that provide the organization with a view of its safety performance: where it has been; where it is now; and where it is headed, in relation to safety. This picture acts as a solid and defensible foundation upon which the organization’s data-driven safety decisions are made. These decisions, in turn, positively affect the organization’s safety performance. The identification of SPIs should be realistic, relevant, aligned with the organization’s safety objectives, and appropriate to the size and complexity of its operations.
SPIs should be:
• Aligned with the organization’s safety objectives;
• Selected or developed based on available proactive and reactive data sources;
• Meaningful, measurable, and appropriate to the size and complexity of the organization's operations;
• Comprised of a balanced mix of both leading and lagging indicators; and
• Regularly reviewed to ensure their relevance, effectiveness, and alignment with evolving safety goals and risks.
The contents of each SPI should include:
• Clear Definition and Scope: a precise description of what the SPI measures, including its relevance to safety performance;
• Purpose and Objectives: a clear statement of the SPI's purpose, specifying its role in monitoring safety performance and the safety risks it addresses;
• Units of Measurement and Calculation Requirements: the specific units of measurement (e.g., incidents per 10,000 flight hours) and the formulas or calculation methods to ensure consistent, accurate tracking; and
• Thresholds and Actionable Alerts: defined thresholds, target levels, and alert levels to provide a basis for performance evaluation, specifying the actions to be taken when thresholds are exceeded or targets are not met.
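The four content items above could be captured as a structured record; this is a minimal sketch, and the field names and example values are assumptions rather than a GACA-prescribed format:

```python
# A structured SPI definition covering: definition/scope, purpose,
# units and calculation, and thresholds with actions on breach.
from dataclasses import dataclass

@dataclass
class SPIDefinition:
    name: str                 # clear definition and scope
    purpose: str              # purpose and the safety risk it addresses
    unit: str                 # units of measurement
    formula: str              # calculation requirement
    target_level: float       # desired performance level
    alert_level: float        # threshold that triggers action
    action_on_breach: str     # action when the alert level is exceeded

# Hypothetical example entry
incursion_spi = SPIDefinition(
    name="Runway incursion rate",
    purpose="Monitor the risk of runway conflicts during surface operations",
    unit="incursions per 10,000 movements",
    formula="(incursions / movements) * 10_000",
    target_level=0.5,
    alert_level=1.2,
    action_on_breach="Convene safety review and initiate risk assessment",
)
```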
Lagging and Leading SPIs
Lagging Safety Performance Indicators (SPIs) reflect past safety performance and outcomes, providing insight into the effectiveness of the safety management system after safety events have occurred. These indicators help assess whether safety goals have been met and if corrective actions were successful in preventing similar occurrences.
Example:
• Number of accidents or serious incidents (high consequence).
• Rate of occurrences, such as incidents per 10,000 flight hours (low consequence).
• Number of non-conformances or regulatory violations (low consequence).
Leading Safety Performance Indicators (SPIs) are proactive measures that provide early warnings of potential safety risks. They are typically based on data collected before safety events occur, allowing for corrective actions to be taken to prevent accidents or incidents. These indicators help in identifying and mitigating safety risks in real time.
Example:
• Percentage of employees who have completed safety training.
• Percentage of safety reports processed within a specified timeframe.
• Number of safety audits conducted.
SPIs & SPTs Development Guide
Collect Data:
Gather safety data from relevant sources, such as:
• Flight data monitoring (FDM);
• Occurrence reports;
• Audit reports;
• Safety surveys;
• Organization’s risk registers; and
• Traffic movement (monthly, quarterly and annually), if applicable.
Establish Indicators: Once the data is collected, it is processed and analyzed to determine monthly occurrence rates. The results are organized into a clear, meaningful table for easy interpretation, and the table is then translated into an informative chart that visually presents the performance of the selected measurable area across monthly, quarterly, or yearly intervals, giving a comprehensive overview of trends and insights.
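The monthly occurrence-rate calculation described above can be sketched in a few lines of Python. This is an illustrative sketch only; the function and parameter names are assumptions, not terms defined by GACA.

```python
# Hedged sketch: occurrence rate normalized per 1,000 departures (or another
# traffic-movement denominator). Names here are illustrative assumptions.

def occurrence_rate(num_incidents: int, num_departures: int, per: int = 1000) -> float:
    """Rate = (incidents / departures) * normalization factor."""
    if num_departures <= 0:
        raise ValueError("number of departures must be positive")
    return num_incidents / num_departures * per

# Worked example from the text: 10 incidents over 3,000 departures.
rate = occurrence_rate(10, 3000)
print(round(rate, 1))  # 3.3 incidents per 1,000 departures
```

The same helper could be reused with working hours or maintenance activities as the denominator, as applicable to the certificate holder.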
Method: The occurrence rate for incidents can be calculated using the following formula:
Incident rate per 1,000 departures = (Number of incidents / Number of departures) × 1,000
Example: Number of incidents in the first quarter = 10; number of departures = 3,000; therefore:
Incident rate per 1,000 departures = (10 / 3,000) × 1,000 = 3.3 incidents per 1,000 departures.
(The factor of 1,000 is used because the number of actual departures during the first quarter is 3,000.)
Set Targets:
To establish baseline values for each Safety Performance Indicator (SPI), historical data should be analyzed to determine past performance trends. For example, the target for the current year’s monitoring period may be set to achieve a 5% improvement (lower rate) compared to the average value of the preceding period.
In addition, benchmarking can be used to inform target setting. This involves comparing the organization’s performance with that of similar organizations that have already measured and reported their safety performance. By utilizing external benchmarks, the organization can set realistic and competitive performance targets aligned with industry standards.
Develop SMART Targets: Ensure that targets are Specific, Measurable, Achievable, Relevant, and Time-bound. For example: • Reduce the number of runway incursions by 15% over the next three years or 5% over the next monitoring period.
• Not exceed two serious incidents annually in the current/following year.
Target Achievement: At the end of the current year, if the average rate is lower than the preceding year's average rate by 5% or more, the set target of 5% improvement is deemed to have been achieved.
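The target-setting and target-achievement logic described above can be sketched as follows. This is a minimal illustration; the function names and the example rates are assumptions, not GACA-defined values.

```python
# Hedged sketch: target for the new monitoring period is a 5% improvement
# (lower rate) over the preceding period's average rate.

def improvement_target(preceding_avg: float, improvement: float = 0.05) -> float:
    """Target rate for the new monitoring period."""
    return preceding_avg * (1.0 - improvement)

def target_achieved(preceding_avg: float, current_avg: float,
                    improvement: float = 0.05) -> bool:
    """Achieved if the current average is lower by the improvement margin or more."""
    return current_avg <= preceding_avg * (1.0 - improvement)

# Example (illustrative): preceding-year average rate of 4.0 per 1,000
# departures yields a current-year target of 3.8.
print(improvement_target(4.0))    # 3.8
print(target_achieved(4.0, 3.7))  # True
print(target_achieved(4.0, 3.9))  # False
```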
Define Alert Levels: The alert level for a new monitoring period (current year) is determined based on the performance data of the preceding period (preceding year). Specifically, it is derived from the average of the preceding period’s data points and the standard deviation (SD) of those data points.
The three alert lines are as follows: • Alert Line 1: Average + 1 Standard Deviation (SD) • Alert Line 2: Average + 2 Standard Deviations (SD) • Alert Line 3: Average + 3 Standard Deviations (SD) These alert levels are set using basic safety metric standard deviation criteria to help identify deviations from expected performance.
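The alert-line calculation above can be sketched in Python. Note one assumption: this sketch uses the population standard deviation (`statistics.pstdev`); an organization might equally choose the sample standard deviation.

```python
# Hedged sketch: alert lines for the new monitoring period derived from the
# preceding period's average and standard deviation, per the three-line scheme.
import statistics

def alert_lines(preceding_rates: list[float]) -> dict[str, float]:
    avg = statistics.mean(preceding_rates)
    sd = statistics.pstdev(preceding_rates)  # population SD (an assumption)
    return {
        "average": avg,
        "alert_1sd": avg + 1 * sd,
        "alert_2sd": avg + 2 * sd,
        "alert_3sd": avg + 3 * sd,
    }

# Example: preceding year's monthly incident rates (illustrative values).
lines = alert_lines([3.0, 2.5, 3.5, 4.0, 3.0, 2.0, 3.5, 3.0, 2.5, 4.5, 3.0, 3.5])
print({k: round(v, 2) for k, v in lines.items()})
```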
Alert Level Trigger: An alert (abnormal/unacceptable trend) is triggered if any of the conditions below are met for the current monitoring period (current year): • any single point is above the 3 SD line.
• 2 consecutive points are above the 2 SD line. • 3 consecutive points are above the 1 SD line. When an alert is triggered (potential high risk or out-of-control situation), appropriate follow-up action is expected, such as further analysis to determine the source and root cause of the abnormal incident rate and any necessary action to address the unacceptable trend.
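The three trigger conditions above can be checked mechanically. The sketch below is illustrative; the function names and the example average/SD values are assumptions.

```python
# Hedged sketch of the alert-trigger rules: an alert fires if any single point
# is above the 3 SD line, 2 consecutive points are above the 2 SD line, or
# 3 consecutive points are above the 1 SD line.

def alert_triggered(points: list[float], avg: float, sd: float) -> bool:
    def run_above(line: float, run_length: int) -> bool:
        count = 0
        for p in points:
            count = count + 1 if p > line else 0
            if count >= run_length:
                return True
        return False

    return (run_above(avg + 3 * sd, 1)
            or run_above(avg + 2 * sd, 2)
            or run_above(avg + 1 * sd, 3))

# Illustrative example: average 3.0, SD 0.5, so the 3 SD line sits at 4.5.
print(alert_triggered([3.1, 4.6, 2.9], avg=3.0, sd=0.5))       # True: point above 3 SD
print(alert_triggered([3.6, 3.7, 3.6, 3.0], avg=3.0, sd=0.5))  # True: 3 consecutive above 1 SD
print(alert_triggered([3.1, 3.2, 2.8, 3.0], avg=3.0, sd=0.5))  # False
```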
Alert and target levels should be reviewed and reset for each new monitoring period, based on the equivalent preceding period's average rate and SD, as applicable.
Validate with Stakeholders: Review and validate proposed SPIs/SPTs with:
• Relevant department leads (Operations, Maintenance, etc.);
• Relevant safety management groups; and
• The Accountable Executive.
The purpose of the review is to ensure that the SPI and SPT framework and selection remain relevant and effective.
Review and Adjust: The effectiveness of the SPIs and SPTs should be reviewed at regular intervals (e.g., quarterly). If necessary, adjust indicators and targets based on safety performance, trends, emerging risks, and updates to safety objectives.
Report Submission and Frequency: Service providers must annually submit SPIs and SPTs that are acceptable to the President. The SPI and SPT reports must be submitted to the President on a quarterly basis for review and analysis. Each report must include the total number of recorded incidents and the traffic movement or working hours, as applicable, for the reporting period (monthly, quarterly, and/or annually).
SPIs and SPTs Evaluation: For initial certification, and as part of the initial SMS acceptance, the GACA inspector must review and evaluate the organization's submitted SPIs and SPTs to determine whether they are acceptable, utilizing the SPI & SPT evaluation checklist (GACA-AVSES-SRM-F020), which ensures the submitted SPIs and SPTs meet the required criteria and methodology. The evaluation must be carried out annually to ensure the SPIs and SPTs remain acceptable.
Each item in the checklist has a specific weight, expressed as a percentage, contributing to an overall score. However, the first three items in Part 1 are mandatory: if any of them is not fulfilled, the submitted SPIs and SPTs are considered unacceptable to GACA.
The combined weight of the remaining items sums to 100%, so the total percentage is distributed only among these items. This ensures a proper assessment of compliance while preserving the three crucial gating criteria.
Evaluation criteria by weight:
• >= 90%: Acceptable.
• >= 75% and < 90%: Acceptable with comments and recommendations.
• < 75%: Not acceptable.
Submission to Safety & Risk Management General Department (SRM GD): Once the SPI and SPT evaluation is complete and the final result is acceptable, the checklist must be submitted to the SRM General Department for review and record keeping for State Safety Assurance purposes.
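The checklist logic described above (three mandatory pass/fail gates, remaining items weighted to 100%, score mapped to acceptance bands) can be sketched as follows. The item names and weights below are hypothetical; the actual items and weights come from form GACA-AVSES-SRM-F020, which is not reproduced here.

```python
# Hedged sketch of the checklist evaluation: mandatory items gate the result,
# then the weighted score selects an acceptance band. Weights are hypothetical.

def evaluate(mandatory_items: list[bool],
             weighted_items: dict[str, tuple[float, bool]]) -> str:
    if not all(mandatory_items):
        return "Not acceptable (mandatory item not fulfilled)"
    # Sum the weights of fulfilled items; weights are assumed to total 100.
    score = sum(weight for weight, fulfilled in weighted_items.values() if fulfilled)
    if score >= 90:
        return "Acceptable"
    if score >= 75:
        return "Acceptable with comments and recommendations"
    return "Not acceptable"

result = evaluate(
    mandatory_items=[True, True, True],
    weighted_items={
        "clear definition and scope": (40.0, True),
        "units and calculation method": (35.0, True),
        "thresholds and alert levels": (25.0, False),
    },
)
print(result)  # Acceptable with comments and recommendations
```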
Mandatory SPIs
Certificate holders are obligated to submit, at a minimum, the following SPIs:
Part 121, 125, 135 and 141

Monitored Indicators (measured both per 1,000 departures and as actual number):
• Accidents
• Serious Incidents
• Incidents
• Runway Incursion
• Runway Excursion
• Stall warning or stick shaker activation
• Unusual Attitudes and Aircraft Upsets
• Unstable Approaches Below 1,000 ft AGL that resulted in an incident
• Genuine TCAS RA events
• Engine shutdown/flameout during flight
• Flight Control System Malfunction
• Genuine ground proximity warning system warning (GPWS/TAWS triggered)
• Major defects
Monitored Indicators (measured as actual number):
• Flight Crew Fatigue Reports
• Internal Voluntary Reports
Part 171

Monitored Indicators (measured both per 1,000 departures and as actual number):
• Accidents
• Serious Incidents
• Incidents
• AIRPROX events
• Runway Incursion
• Runway Excursion
• Genuine TCAS RA events (not including false TCAS RA alerts caused by equipment malfunctions or loss of GNSS)
• Loss of Separation
• Controlled Flight into Terrain
Monitored Indicators (measured as actual number):
• Internal Voluntary Reports
• Internal Fatigue Reports
Part 137, 138, 139

Monitored Indicators (measured both per 1,000 departures and as actual number):
• Aircraft Related Accidents
• Serious Incidents
• Incidents
• Runway Incursion
• Runway Excursion
Monitored Indicators (measured as actual number):
• Internal Voluntary Reports
• Internal Fatigue Reports
Part 151

Monitored Indicators (measured both per 1,000 departures and as actual number):
• Accidents
• Serious Incidents
• Incidents
• Serious damage to aircraft during ground handling activities, as applicable
• Aircraft weight and balance errors, as applicable
Monitored Indicators (measured as actual number):
• Internal Voluntary Reports
• Internal Fatigue Reports
Part 145

Monitored Indicators:
• Major Defects: per working hours or per 1,000 maintenance activities
• Serious damage to aircraft during maintenance activities: per 1,000 maintenance activities
• Cases of inappropriate storage of hardware or components during maintenance: per 1,000 maintenance activities
• Internal Voluntary Reports: actual number
• Internal Fatigue Reports: actual number
Recommended SPIs
The following SPIs are not mandated; rather, they are provided to help service providers establish or improve their SPIs:
Recommended SPIs
Note: The following indicators are generic, and any certified organization may use any SPI that is related to its operation/organization.
Monitored Indicators (measured as rate per departure):
• Rejected Takeoffs
• Declared Emergency
• Wildlife
• Bird
• Birdstrike(s) – with Damage
• Diversion Due to Technical
• Exceedances of Aircraft Pitch/Roll/Bank Limits
• Use of Contaminated or Incorrect Type of Fuel
• Engine System Malfunction During Flight
• Incorrect or delayed readbacks of ATC runway/taxi clearances
• ATC or pilot deviation reports related to incorrect runway/taxiway entry
• Cases of runway confusion
• Deviation from an air traffic control clearance
• Deviation from intended flight path/attitude
• Aircraft Deviation from ATC Clearance
• Unauthorized Penetration of Airspace
• Hard Landings
• Deep/Long Landings
• High Speed Approaches
• Airspace Infringements
• Flight Crew incapacitation in flight
• Flight Crew Fatigue Reports
• Non-flight Crew Fatigue Reports
• Incorrect loading (Ground Handling Ramp)
• Dangerous Goods incidents
• In-Flight Loss of Control
• Cases of Foreign Object Damage
Indicator definitions:
Other SPI Examples
Note: The following indicators are generic, and any certified organization may use any SPI that is related to its operation/organization.
Monitored Indicators and Descriptions:
Accident: An occurrence associated with the operation of an aircraft which, in the case of a manned aircraft, takes place between the time any person boards the aircraft with the intention of flight until such time as all such persons have disembarked, or in the case of an unmanned aircraft, takes place between the time the aircraft is ready to move with the purpose of flight until such time as it comes to rest at the end of the flight and the primary propulsion system is shut down, in which:
a) a person is fatally or seriously injured as a result of: —being in the aircraft, or —direct contact with any part of the aircraft, including parts which have become detached from the aircraft, or —direct exposure to jet blast, except when the injuries are from natural causes, self-inflicted or inflicted by other persons, or when the injuries are to stowaways hiding outside the areas normally available to the passengers and crew; or b) the aircraft sustains damage or structural failure which:
—adversely affects the structural strength, performance or flight characteristics of the aircraft, and would normally require major repair or replacement of the affected component, except for engine failure or damage, when the damage is limited to a single engine, (including its cowlings or accessories), to propellers, wing tips, antennas, probes, vanes, tires, brakes, wheels, fairings, panels, landing gear doors, windscreens, the aircraft skin (such as small dents or puncture holes), or for minor damages to main rotor blades, tail rotor blades, landing gear, and those resulting from hail or bird strike (including holes in the radome); or c) the aircraft is missing or is completely inaccessible.
Note 1— For statistical uniformity only, an injury resulting in death within thirty days of the date of the accident is classified, by ICAO, as a fatal injury. Note 2 — An aircraft is considered to be missing when the official search has been terminated and the wreckage has not been located.
Note 3 — The type of unmanned aircraft system to be investigated is addressed in 5.1 of the ICAO Annex 13. Note 4— Guidance for the determination of aircraft damage can be found in Attachment F of the ICAO Annex 13.
Note 5— Also referred to as "aircraft accident".
Serious Incident: An incident involving circumstances indicating that there was a high probability of an accident, and associated with the operation of an aircraft which, in the case of a manned aircraft, takes place between the time any person boards the aircraft with the intention of flight until such time as all such persons have disembarked, or in the case of an unmanned aircraft, takes place between the time the aircraft is ready to move with the purpose of flight until such time as it comes to rest at the end of the flight and the primary propulsion system is shut down.
Note 1— The difference between an accident and a serious incident lies only in the result.
Note 2— Examples of serious incidents can be found in ICAO Annex 13, Attachment C.
Incident: An occurrence, other than an accident, associated with the operation of an aircraft which affects or could affect the safety of operation.
Runway Incursion: An event involving the incorrect presence of an aircraft, vehicle, or person on the protected area of a surface designated for the landing and takeoff of aircraft.
Runway Excursion: A veer off or overrun off the runway surface during the takeoff or landing phase only.
Stall warning or stick shaker activation: Stall warning triggered.
Unusual Attitudes and Aircraft Upsets: Aircraft upset, exceeding normal pitch attitude, bank angle, or airspeed inappropriate for the conditions.
Unstable Approaches Below 1,000 ft AGL that resulted in an incident: An event where stabilized approach parameters are not met. The recommended parameters that define a stabilized approach, and that should be met by 1,000 feet above airport elevation in IMC or 500 feet in VMC, are:
1. The aircraft is on the correct flight path; 2. Only small changes in heading/pitch are required to maintain the correct flight path; 3. The aircraft speed is not more than VREF + 20 knots indicated airspeed and not less than VREF;
4. The aircraft is in the correct landing configuration; 5. Sink rate is no greater than 1,000 feet per minute; if an approach requires a sink rate greater than 1,000 feet per minute, a special briefing should be conducted;
6. Power setting is appropriate for the aircraft configuration and is not below the minimum power for approach as defined by the aircraft operating manual; 7. All briefings and checklists have been conducted;
8. Specific types of approaches are stabilized if they also fulfill the following: instrument landing system (ILS) approaches must be flown within one dot of the glide slope and localizer; a Category II or Category III ILS approach must be flown within the expanded localizer band.
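The numeric portions of the stabilized-approach criteria above (speed within Vref to Vref + 20, sink rate at or below 1,000 fpm) lend themselves to a simple check; the sketch below is an illustrative assumption, with the configuration, flight-path, and briefing criteria reduced to boolean stand-ins. Real flight data monitoring logic would be considerably richer.

```python
# Hedged sketch of a stabilized-approach check against the listed criteria.
# Parameter names are illustrative assumptions, not defined terms.

def is_stabilized(ias_kt: float, vref_kt: float, sink_rate_fpm: float,
                  on_flight_path: bool, landing_config: bool,
                  briefings_complete: bool) -> bool:
    return (on_flight_path                          # criteria 1-2 (simplified)
            and landing_config                      # criterion 4
            and briefings_complete                  # criterion 7
            and vref_kt <= ias_kt <= vref_kt + 20   # criterion 3
            and sink_rate_fpm <= 1000)              # criterion 5

# Example: Vref 130 kt, 135 kt indicated, 700 fpm sink rate.
print(is_stabilized(135, 130, 700, True, True, True))   # True
print(is_stabilized(155, 130, 700, True, True, True))   # False: speed above Vref + 20
```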
Genuine TCAS RA events: An event in which a loss of separation between aircraft leads to a genuine airborne collision avoidance system/traffic alert and collision avoidance system (TCAS) resolution advisory (RA) activation.
Engine shutdown/flameout during flight: An engine shutdown or flameout during flight.
Flight Control System Malfunction: Abnormal functioning of flight controls, such as asymmetric or stuck/jammed flight controls (for example: lift (flaps/slats), drag (spoilers), or attitude control (ailerons, elevators, rudder) devices).
Genuine ground proximity warning system warning (GPWS/TAWS triggered): An event involving a genuine ground proximity warning system warning indicating a near collision by an aircraft with terrain.
Major defects: An aircraft defect that may affect the safety of the aircraft or cause the aircraft to become a danger to persons or property; or a component defect that may affect the aircraft's safety or cause a danger to persons or property if the component is fitted to an aircraft.
Rejected Takeoffs: A rejected take-off as the consequence of previous events; either a high speed rejected take-off (at and above V1) or a low speed rejected take-off (below V1).
Declared Emergency: A declared emergency as the consequence of previous events; a Distress (MAYDAY) or Urgency (PAN PAN) call.
Wildlife: A potential for a damaging aircraft collision with wildlife on or near an aerodrome.
Bird: A bird on or near an aerodrome or aircraft.
Birdstrike(s) – with Damage: An event involving an aircraft collision with a bird.
Diversion Due to Technical: An event involving any diversion of an aircraft from the intended destination due to technical reasons.
Exceedances of Aircraft Pitch/Roll/Bank Limits: Exceedances of aircraft pitch, roll, or bank limits.
Use of Contaminated or Incorrect Type of Fuel: Fuel supplied to the powerplant(s) is incorrect; for example, Jet A into a piston powerplant, or 80 octane into a powerplant requiring 100 octane.
Engine System Malfunction During Flight: An engine system malfunction during flight.
Incorrect or delayed readbacks of ATC runway/taxi clearances: Incorrect or delayed readbacks of ATC runway or taxi clearances.
ATC or pilot deviation reports related to incorrect runway/taxiway entry: ATC or pilot deviation reports related to incorrect runway or taxiway entry.
Cases of runway confusion: Cases of runway confusion.
Deviation from intended flight path/attitude: An event involving a deviation from the intended flight path or attitude.
Aircraft Deviation from ATC Clearance: An event involving a deviation from an air traffic control clearance.
Unauthorized Penetration of Airspace: An aircraft enters a designated airspace without the required permission from air traffic control.
Hard Landings: An event involving a hard landing; a landing in which the vertical deceleration encountered required a hard landing check or inspection.
Long Landings: The aircraft landed beyond the touchdown zone.
High Speed Approaches: An aircraft landing approach performed at a speed greater than the aircraft's typical or recommended landing speed, often due to a lack of stabilized approach criteria, specific operational needs such as gusty winds, or a misapplication of technique by the pilot.
Airspace Infringements: An airspace infringement (AI) is the unauthorized entry of an aircraft into notified airspace. This includes controlled airspace, prohibited and restricted airspace, active danger areas, aerodrome traffic zones, radio mandatory zones, and transponder mandatory zones.
Flight Crew incapacitation in flight: An event involving a flight crew member's incapacitation.
Flight Crew Fatigue Reports: A fatigue report by a licensed flight crew member charged with duties essential to the operation of an aircraft during a flight duty period, where the fatigue impacts or could potentially impact the crew member's ability to perform flight duties safely.
Non-flight Crew Fatigue Reports: A fatigue report by a person other than a flight crew member charged with duties related to the operation of an aircraft during a flight duty period, where the fatigue impacts or could potentially impact that person's ability to perform their duties safely.
Incorrect loading: An event involving incorrect loading of the aircraft.
Dangerous Goods incidents: An occurrence associated with and related to the transport of dangerous goods by air which results in injury to a person, property or environmental damage, fire, breakage, spillage, leakage of fluid or radiation, or other evidence that the integrity of the packaging has not been maintained. Any occurrence relating to the transport of dangerous goods which seriously jeopardizes the aircraft or its occupants is also deemed to constitute a dangerous goods incident, as is the carriage or attempted carriage of dangerous goods in contravention of applicable legislation, including incorrect labeling, packaging, and handling of dangerous goods.
In-Flight Loss of Control: Loss of aircraft control, or deviation from the intended flight path, while in flight. Loss of control in flight is an extreme manifestation of a deviation from the intended flight path. The phrase "loss of control" may cover only some of the cases during which an unintended deviation occurred; it is used only for airborne phases of flight in which aircraft control was lost.
Foreign Object Debris damage: Events of FOD damaging the aircraft.
Foreign object debris (FOD): An inanimate object within the movement area which has no operational or aeronautical function, and which has the potential to be a hazard to aircraft operations.
Serious Damage to Aircraft During Maintenance Activities: Serious damage to aircraft during maintenance activities.
Inappropriate storage of hardware or components during maintenance: Inappropriate storage of hardware or components during maintenance.
Crew/staff not licensed for activity: Crew or staff did not possess a valid license for the activity undertaken.
Duty time exceeded: The exceedance of the crew duty period by a person, with reference to the applicable GACAR.
Rest time less than required by regulation: Rest time is less than required by the applicable GACAR.
Inadvertent slide deployment: Inadvertent deployment of emergency slide equipment.
Fire/fumes/smoke: Events involving fires or smoke in the cockpit, in the passenger compartment, or in cargo compartments, or engine fires.
Low fuel level warning: Critically low fuel quantity, or fuel quantity at destination below the required final reserve fuel.
Loss of tail rotor effectiveness (LTE): LTE is a critical low speed aerodynamic flight condition which can result in an uncommanded rapid yaw rate which does not subside of its own accord. LTE is not related to a technical malfunction. It is the result of the tail rotor not providing adequate thrust to maintain directional control, and is usually caused either by certain wind azimuths (directions) while hovering, or by insufficient tail rotor thrust for a given power setting at higher altitudes.
Helicopter settling with power/vortex ring:
Settling with power: A condition of helicopter power settling in which hover power required exceeds power available, normally resulting from an attempt to hover out of ground effect with insufficient power available to compensate for elevation, temperature, and/or humidity.
Vortex ring state: An area of non-uniform and unsteady airflow around a rotating main rotor or tail rotor in which the rotor is affected by an induced velocity of airflow that approaches or exceeds the airflow being produced by the affected rotor. It is characterized by a sudden requirement for increased power and/or rotor pitch when airflow from the affected rotor is forced back through and around the rotor.
Taxiway incursion: Any occurrence at an aerodrome involving the incorrect presence of an aircraft, vehicle, or person on the taxiway.
Flight control system malfunction: Abnormal functioning of flight controls, such as asymmetric or stuck/jammed flight controls (for example: lift (flaps/slats), drag (spoilers), or attitude control (ailerons, elevators, rudder) devices).
Engine shutdown/flameout during flight: An engine shutdown or flameout during flight.
SPIs and SPTs Evaluation Form
Figure 2.5.b.1. SPIs/SPTs Evaluation Process
Section 1. Safety Risk Management (SRM) Processes and Tools.
2.6.1.1. INTRODUCTION. This section describes fundamental Safety Risk Management (SRM) concepts, discusses what types of changes are evaluated for safety risk, and details the process and guidance available for determining whether a change requires a complete safety analysis under SRM. SRM (SMS Component 2.0) is one of the two core operational activities under an SMS (the other being SMS Component 3.0 – Safety Assurance). The management of change (SMS Element 3.2) is also directly related to this subject. This section supplements the guidance on Component 2.0 – SRM and Element 3.2 – Management of Change contained in Chapters 2 and 4. The section also outlines the process of assessing and managing safety risk, including:
• Definitions of commonly used terms • Descriptions of safety analysis activities early in the planning or change proposal process • Descriptions of the evidence and documentation that indicate that the objectives have been met A. This document describes the documentation necessary for safety analyses and the required components of the documentation. In addition, it provides information on how organizations should formally document (and approve) their SRM activities and outputs, accept risk and track changes.
NOTE: See Figure 2.6.1.9 at the end of this section for a glossary of terms used throughout this chapter.
2.6.1.3. SRM OVERVIEW.
A. How Change Affects Safety. Changes to any system create the potential for increased safety risk as the changes interact or interface with existing procedures, systems, or operational environments. Aviation personnel can use SRM to maintain or improve safety by identifying, managing, and mitigating the safety risk associated with all changes (e.g., changes to systems (hardware and software), equipment, and procedures) that impact safety.
B. SRM Defined. SRM is a formalized, proactive approach to system safety. SRM is a methodology applied to all changes that ensures hazards are identified, and unacceptable risk is mitigated and accepted prior to the change being made. In this context, a change could be any change to or modification of airspace; airports; aircraft; maintenance programs; pilots; air navigation facilities; air traffic control (ATC) facilities; communication, surveillance, navigation, and supporting technologies and systems; operating rules, regulations, policies, and procedures; and the people who implement, sustain, or operate the system components. It provides a framework to ensure that once a change is made, it continues to be tracked throughout its lifecycle.
1) SRM is a fundamental component of a Safety Management System (SMS). It is a systematic, explicit, and comprehensive analytical approach for managing safety risk at all levels and throughout the entire scope of an operation or the lifecycle of a system. It requires the disciplined assessment and management of safety risk.
2) The SRM process is a means to:
• Document proposed changes regardless of their anticipated safety impact
• Identify hazards associated with a proposed change
• Assess and analyze the safety risk of identified hazards
• Mitigate unacceptable safety risk and reduce the identified risks to the lowest possible level
• Accept residual risks prior to change implementation
• Implement the change and track hazards to resolution
• Assess and monitor the effectiveness of the risk mitigation strategies throughout the lifecycle of the change
• Reassess the change based on the effectiveness of the mitigations
C. System, Hazard, and Risk Defined. Three important terms necessary to discuss making changes to aviation-related systems, the resulting potential hazards, and the management of risk are:
1) System. A system is an integrated set of constituent pieces that are combined in an operational or support environment to accomplish a defined objective. These pieces include people, equipment, information, procedures, facilities, services, and other support services.
2) Hazard. A hazard is any real or potential condition that can cause injury, illness, or death to people; damage to or loss of a system, equipment, or property; or damage to the environment. A hazard is a condition that is a prerequisite to an accident or incident.
3) Risk. Risk is the composite of predicted severity and likelihood of the potential effect of a hazard in the worst credible system state. Severity, likelihood, and system state will be defined later in this document.
NOTE: The system safety methodology, as described in this section, addresses risk on an individual hazard-by-hazard basis and, therefore, does not address aggregate safety risk. Aviation personnel can determine risk acceptability using the risk matrix in Figure 2.6.1.8.
D. Defenses in Depth - Designing an Error Tolerant System. Given the complex interplay of human, material, and environmental factors in operations, the complete elimination of risk is an unachievable goal. Even in organizations with the best training programs and a positive safety culture, human operators will occasionally make errors; the best-designed and maintained equipment will occasionally fail. System designers take these factors into account and strive to design and implement systems that will not result in an accident due to an error or equipment failure. These systems are referred to as “error tolerant.”
1) Error Tolerant System. An error tolerant system is defined as a system designed and implemented in such a way that, to the maximum extent possible, errors and equipment failures do not result in an incident or accident.
2) Developing a Safe and Error Tolerant System. The system is required to contain multiple defenses, so that no single failure or error can result in an accident. An error tolerant system includes mechanisms that will recognize a failure or error, so that corrective action can be taken before a sequence of events leading to an accident develops. The need for a series of defenses rather than a single defensive layer arises from the possibility that the defenses may not always operate as designed. This design philosophy is called “defenses in depth.”
3) Failures in the Defensive Layers. An operational system can develop gaps in its defenses. As the operational situation or equipment serviceability states change, gaps may occur as a result of:
• Undiscovered and longstanding shortcomings in the defenses
• The temporary unavailability of some elements of the system as the result of maintenance action
• Equipment failure
• Human error or violation
4) Design Attributes. Design attributes of an error tolerant system include:
• Making errors conspicuous (error evident systems)
• Trapping the error to prevent it from affecting the system (error captive systems)
• Detecting errors and providing warning and alerting systems (error alert systems)
• Ensuring that there is a recovery path (error recovery systems)
5) Well-Designed System. For an accident to occur in a well-designed system, these gaps must develop in all of the defensive layers of the system at the critical time when that defense should have been capable of detecting the earlier error or failure. An illustration of how an accident event must penetrate all defensive layers is shown in Figure 2.6.1.1. This concept is commonly referred to as James Reason’s “Swiss Cheese” model.
Figure 2.6.1.1. Defenses in Depth Philosophy
6) Gaps in System Defenses. The gaps in the system’s defenses shown in Figure 2.6.1.1 are not necessarily static. Gaps “open” and “close” as the operational situation, environment, or equipment serviceability states change. A gap may sometimes be the result of nothing more than a momentary oversight on the part of a controller or operator. Other gaps may represent long-standing latent failures in the system.
7) Latent Failure. A latent failure is considered a failure that is not inherently revealed at the time it occurs. For example, in an electrically powered system, when there is a slowly degrading back-up battery that has no state-of-charge sensor, the latent failure would not be identified until the primary power source failed and the back-up battery was needed. If no maintenance procedures exist to periodically check the battery, the failure would be considered an undetected latent event.
E. Detecting Gaps. The task of reducing risk can be applied in both proactive and reactive ways. Careful analysis of a system and operational data monitoring make it possible to identify sequences of events where faults and errors (either alone or in combination) could lead to an incident or accident before it actually occurs. The same approach to analyze the chain of events that lead to an accident can also be used after the accident occurs. Identifying the active and latent failures revealed by this type of analysis enables one to take corrective action to strengthen the system’s defenses.
F. Closing Gaps. The following examples of typical defenses used in combination to close gaps are illustrative and by no means a comprehensive list of solutions: 1) Equipment: • Redundancy o Full redundancy providing the same level of functionality when operating on the alternate system o Partial redundancy resulting in some reduction in functionality (e.g., local copy of essential data from a centralized network database) • Independent checking of design and assumptions • System designed to ensure that a critical functionality is maintained in a degraded mode in the event that individual elements fail • Policies and procedures regarding maintenance, which may result in loss of some functionality in the active system or loss of redundancy • Automated aids or diagnostic processes designed to detect system failures or processing errors and report those failures appropriately • Scheduled maintenance 2) Operating Procedures:
• Adherence to standard terminology/phraseology and procedures
• Confirmation of critical items in instructions
• Checklists and habitual actions
• Training, analyses, and reporting methods

3) Organizational Factors:
• Management commitment to safety
• Current state of safety culture
• Clear safety policy
o Implemented with adequate funding provided for safety management activities
• Oversight to ensure correct procedures are followed
o No tolerance for willful violations or shortcuts
• Adequate control over the activities of contracted personnel outside the organization

NOTE: For developing risk mitigation controls, refer to Table 2.6.1.5.
G. Effect of Hardware and Software on Safety. System designers generally design the hardware and software components of a system to meet specified levels of reliability, maintainability, and availability. The techniques for estimating system performance in terms of these parameters are well established. When necessary, system designers can build redundancy into a system, to provide alternatives in the event of a failure of one or more elements of the system.
1) Designers use system redundancy and hardware and/or software diversity to provide service in the event of primary system failures. Diverse hardware and software can meet the functional requirements of the back-up mode.
2) Physical diversity is another method system designers use to increase the likelihood of service availability in the event of failures. Physical diversity involves separating redundant functions so that a single point of failure does not corrupt both paths, making the service unavailable. An example of physical diversity would be to bring an electrical power supply into a system through two different locations. In the event of a fire or other issue in one location, the alternate path would still provide power, which increases the likelihood that the system would remain available.
3) When a system includes software and/or hardware, the safety analyses consider possible design errors and the hazards they may create. Systematic design processes are an integral part of detecting and eliminating design errors.
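The redundancy and physical-diversity concepts in items 1) through 2) above can be sketched as follows. The names (`PowerFeed`, `powered`) and the two-feed scenario are invented for illustration, not taken from any particular system design.

```python
# Sketch of physical diversity: two independent power feeds entering
# the facility at different locations, so a single localized failure
# (e.g., a fire at one entry point) cannot take down both paths.

class PowerFeed:
    def __init__(self, location):
        self.location = location
        self.available = True


def powered(feeds):
    # The service stays available as long as any one diverse path survives.
    return any(f.available for f in feeds)


feeds = [PowerFeed("north utility entrance"),
         PowerFeed("south utility entrance")]

feeds[0].available = False   # fire or other issue at one entry point
still_up = powered(feeds)    # the alternate path keeps the system available

feeds[1].available = False   # only a common-cause loss of both paths
down = powered(feeds)        # makes the service unavailable
```

The design choice illustrated is that diversity protects against a single point of failure, not against common-cause events that defeat both paths at once.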
H. Human Element’s Effect on Safety. Ultimately, every system exists to assist a human in task performance. Therefore, system designers must design the human-to-system interface and associated procedures to capitalize on human capabilities and to compensate for human limitations.
One limitation is human performance variability, which necessitates careful and complete analysis of the potential impact of human error. Machines and systems are built to function within specific tolerances, so that identical machines have identical, or nearly identical, characteristics. By contrast, humans vary due to genetic and environmentally determined differences. Designers take these differences into account when designing products, tools, machines, and systems to “fit” the target user population. Human capabilities and attributes differ in areas such as:
• Sense modalities (manner and ability of the senses, such as seeing, hearing, and touching)
• Cognitive functioning
• Reaction time
• Physical size and shape
• Physical strength

1) Fatigue, illness, and other factors such as stressors in the environment, noise, and task interruption also impact human performance. Designers use Human Error Analysis (HEA) to identify the human actions in a system that can create hazardous conditions. Optimally, the system is designed to resist human error (error resistant system) or, at a minimum, to tolerate human error (error tolerant system).
2) Human error is estimated to have been a causal factor in 60 to 80 percent of aviation accidents and incidents and is directly linked with system safety, error, and risk. People make errors, which have the potential to create hazards. In addition, accidents and incidents often result from a chain of independent errors. For this reason, system designers must design safety-critical systems to eliminate as many errors as possible, minimize the effects of errors that cannot be eliminated, and lessen the negative impact of any remaining potential human errors.
3) As a general rule, “human factors” can be defined as a “multidisciplinary effort to generate and compile information about human capabilities and limitations and apply that information to equipment, systems, facilities, procedures, jobs, environments, training, staffing, and personnel management for safe, comfortable, effective human performance.”

4) Human factors is a discipline that examines the human role in a system or application (e.g., hardware, software, procedure, facility, document, other entity) and how the human is integrated into the design. It applies knowledge of how humans function in terms of perception, cognition, and biomechanics to the design of tools, products, and systems that are conducive to human task performance and protective of human health and safety.
5) When examining adverse events attributed to human error, elements of the human-to-system interface (such as display design, controls, training, workload, or manuals and documentation) are often flawed. Human reliability analysis and the application of human performance knowledge must be an integral part of the SMS, affecting system design for safety-critical systems. Recognizing the critical role that humans and human error play in complex systems and applications has led to the development of the human-centered design approach, which is central to the concept of managing human errors that affect safety risk.
2.6.1.5. APPLICABILITY OF SRM TO MANAGEMENT OF CHANGE.
A. Items Requiring Evaluation for Safety Risk. All proposed changes (e.g., new equipment or systems; modifications to existing equipment or systems; and new or changed procedures, operations, and policies) should trigger an SRM evaluation. Figure 2.6.1.2 below provides an overview of SRM and its steps.
Figure 2.6.1.2. SRM Steps
2.6.1.7. PLANNING.
A. Planning the SRM effort requires that an individual:
• Decides the level and type of safety analysis that is needed
• Coordinates with other organizations that may be affected by the change or the risk mitigation strategies

B. The scope of the SRM effort is a function of the nature, complexity, and impact or consequence of the change. It is critical that the scope and complexity of the safety analysis match the scope and complexity of the change. To support this activity in larger organizations, the originating department should consult a Safety Engineer to determine if additional involvement from other organizations is needed.
C. It is important for the group designated as an “SRM Panel” to recognize how systems or items initially determined to have no impact on safety could potentially impact the system or change being analyzed. For instance, air conditioning may not initially appear to have an impact on the safety of a larger system; however, when that system depends on air conditioning to keep it from overheating and failing, air conditioning (or lack thereof) could impact the safety of that system, as well as the safety of the operation as a whole. Issues or potential hazards captured through the SRM process/analysis, but not directly the result of the change being assessed, must be formally passed or transferred to the appropriate party by following the Documenting Existing Hazards process discussed in paragraph 2.6.1.15 E.
D. SRM Panel. An SRM Panel should include representatives and stakeholders from the various organizations affected by the change. It is important that the panel be made up of an appropriately diverse team, including stakeholders and experts, who will be involved, in different capacities, throughout the safety analysis process. A “stakeholder” is a group or individual that is affected by, or is in some way accountable for the outcome of, an undertaking; an interested party having a right, share, or claim in a product or service, or in its success in possessing qualities that meet that party’s needs and/or expectations.
1) Though the size and make-up of the panel will vary with the type and complexity of the proposed change, involving the following types of expertise on the SRM Panel should be considered (list not all-inclusive):
• Employees directly responsible for developing the proposed change
• Employees with current knowledge of and experience with the system or change
• Hardware and/or software engineering or automation expert to provide knowledge on equipment performance
• SRM specialist to guide the application of the methodology
• Human factors specialist
• Software specialist
• Systems specialist
• Employees skilled in collecting and analyzing hazard and error data and using specialized tools and techniques (e.g., operations research, data, human factors, failure mode analysis)

E. Panel Facilitator Responsibilities. For each SRM Panel, there should be one person who serves as the SRM Panel facilitator. The facilitator or a member of the SRM Panel collects information relevant to the change. This information may include meeting with the person who proposed the change. The change proponent must clarify the:
• Current system state or condition
• Proposed change
• Intent of the change
• System state(s) in which the change will be conducted
• Boundaries of the analysis
• Assumptions that may influence the analysis

1) The SRM Panel facilitator ensures that the following occurs:
• Potential panel members are identified
• Panel members have a common understanding of the SMS and SRM principles
• Material required for the first meeting is gathered, including:
o Preliminary Hazard Lists (PHLs) of similar changes
o Collection and analysis of data appropriate to the change to assist in hazard identification and risk assessment
o SRM handouts (severity and likelihood table and risk matrix, as shown in Figure 2.6.1.8)
• Panel members are aware of meeting logistics
• Co-facilitator is identified (co-facilitator will later work with the facilitator and the change proponent to help prepare the final safety document)
• SRM Panel orientation is prepared (i.e., why we are here, what are we trying to accomplish, what is our schedule, etc.)
• Initial set of SRM Panel ground rules are developed (i.e., how the panel members will interact with each other)

2) At the initial meeting, the facilitator must present a panel orientation, including:
• Summary of the goals and objectives for the panel
• Brief review of the SRM process
• Development of SRM Panel ground rules
• Determination of how often the SRM Panel will meet along with location, time, and date
• Presentation of the proposed change with the sample PHL data and other information pertinent to the change

3) Involving panel members with varying experience and knowledge leads to a broader, more comprehensive, and more balanced consideration of safety issues than an individual assessment.
The following is a recommended process for the SRM Panel:
• Individuals use the group session to generate ideas and undertake preliminary assessment only (perhaps identifying factors that are important, rather than working through the implications in detail)
• A subset of the panel, with sufficient breadth of expertise to understand all the issues raised and a good appreciation of the purposes of the assessment, collates and analyzes the findings after the session. The person who facilitated or recorded the session is often best placed to perform this task
• The individuals who collate and analyze the results present them to the group to check that input has been correctly interpreted. This also gives the group a chance to reconsider any aspect once they can see the whole picture
2.6.1.9. PRELIMINARY SAFETY ANALYSIS.
A. Required Levels of Safety Analysis. When proposing a change to a system, change proponents must perform a preliminary safety analysis. If the change does not affect the overall system, there is no need to conduct a further safety analysis. If the change does affect the system, a fundamental question to ask is: does the change have the potential to introduce safety risk into the overall system? Additional questions to make that determination may include:
• Does the change affect the aviation organization and GACA interaction?
• Does the change affect existing processes or procedures?
• Does the change represent a change in operations?
• Does the change modify the form, fit, and/or function of a critical system?

1) If the change is not expected to introduce safety risk, then there is no need to conduct further safety analysis; instead, the change proponent documents that determination, along with the justification for why the change is not subject to additional SRM assessments and supporting documentation beyond the initial safety analysis, in an SRM Decision Memo (SRMDM), described in paragraph 2.6.1.9 B. If the change is expected to impact safety, it is necessary to conduct further safety analysis and document it in a Safety Risk Management Document (SRMD). Even when a change is proposed to improve safety, the need to conduct further safety analysis remains.
2) The level at which an organization conducts SRM varies by organization, change proponent, and/or type of change. In some cases, SRM Panels will perform SRM at the national level, and in other cases, panels will perform SRM at the regional or local level. Not all changes affect safety or require further safety analysis.
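The preliminary screening described above can be sketched as a simple decision helper. The question wording is paraphrased from the text, and the function name `required_document` is invented for the example; real determinations are made by the change proponent or an SRM Panel, not by software.

```python
# Illustrative sketch of the preliminary safety analysis decision:
# if any screening question indicates the change could introduce
# safety risk, a full SRMD is needed; otherwise an SRMDM documents
# the determination with its justification.

SCREENING_QUESTIONS = (
    "affects aviation organization / GACA interaction",
    "affects existing processes or procedures",
    "represents a change in operations",
    "modifies form, fit, or function of a critical system",
)


def required_document(answers):
    """answers: dict mapping each screening question to True/False."""
    if any(answers.get(q, False) for q in SCREENING_QUESTIONS):
        return "SRMD"    # further safety analysis, fully documented
    return "SRMDM"       # decision memo recording why no risk is introduced


answers = {q: False for q in SCREENING_QUESTIONS}
answers["represents a change in operations"] = True
doc = required_document(answers)
```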
B. SRMDM: No Safety Risk Introduced to the Civil Aviation Environment. In the early stages of analysis, it may become evident that a change does not introduce any safety risk into the civil aviation environment or for a certificate holder. In this case, there is no need to further assess the safety risk. The SRMDM can be used to document all proposed changes that do NOT introduce any safety risk (hazards) to the civil aviation environment or for a certificate holder’s operations. Such determinations can be made by the change proponent, affected departments, or an SRM Panel. The SRMDM must include a description of the proposed change and the justification for the decision that the change is not subject to the provisions of additional SRM assessments, and supporting documentation beyond the preliminary safety analysis. The justification must describe the rationale supporting the finding that the proposed change does NOT introduce any safety risk to the civil aviation environment or a certificate holder’s operations. All SRM documentation, including SRMDMs, must be kept on file throughout the lifecycle of a system or change.
1) It is recommended that an SRMDM have two signatures at a minimum, one from the change proponent and one from a designated management official of the affected organization within a certificate holder or component of the civil aviation environment as a whole. Such organizations may have additional signatory requirements as well.
2.6.1.11. WHEN FURTHER SAFETY ANALYSIS IS REQUIRED.
A. SRM Safety Analysis Phases. Consistent with ICAO guidelines and best practices, the SRM phases in Figure 2.6.1.4 are equally applicable to any SRM activity, whether it pertains to operations, maintenance, procedures, or new system development. Figure 2.6.1.5 illustrates how the five phases of the SRM safety analysis are accomplished. Systematically completing these steps creates a thorough and consistent safety analysis.
Figure 2.6.1.4. SRM Safety Analysis Phases

Figure 2.6.1.5. How to Accomplish a Safety Analysis

B. The safety steps are closed-loop, meaning those tasked with executing SRM repeat one or more steps until the safety risk for each hazard is acceptable. Regardless of the phase of operation, these steps assist SRM practitioners in identifying and managing the safety risk associated with providing particular civil aviation services.
2.6.1.13. PHASE 1: DESCRIBE SYSTEM.
A. Describing the System. A good system description is the critical foundation for conducting a sound safety analysis. The system description provides information that serves as the basis to identify all hazards and associated safety risks. It is critical that the SRM Panel members:
1) Define and document the scope and objectives of the proposed change or system. 2) Describe and model the system and operation in sufficient detail for the safety analysis to proceed to the next stage—identifying hazards (e.g., modeling might entail creating a functional flow diagram to help depict the system and the interface with the users, other systems, or sub-systems).
3) Be aware that the system is always a sub-component of some larger system. For example, even if the analysis encompasses all services provided within an entire area, it can be considered a subset of a larger area, which in turn is a subset of a larger part of the system.
B. Potential Effects on the System or Interfacing Systems. This phase considers all critical factors. The resulting description defines the scope of the risk assessment. A complete and accurate system description is the essential foundation for conducting a thorough safety analysis. System descriptions need to exhibit two essential characteristics—correctness and completeness.
• Correctness in a description means that it accurately reflects the system without ambiguity or error
• Completeness means that nothing has been omitted and that everything stated is essential and appropriate to the level of detail

1) A description of the change may be a full report or a paragraph; length is not important, as long as the description covers all of the essential elements. It is vital that the description of the proposed change be correct and complete. If the description is too vague, incomplete, or otherwise unclear, it must be clarified before continuing the safety analysis. Questions to consider include:
• What is the purpose of the system or change?
• How will the system or change be used?
• What are the system or change functions?
• What are the system or change boundaries and external interfaces?
• What is the environment in which the system or change will operate?
• What are the interconnectivity and/or interdependencies between systems?
• How will the change impact system users?

2) The following are examples of data that the people conducting the safety analysis could consider when describing the system:
• Average volume of work products
• Number of hours worked or flown
• Number and type of operations
• Number of aircraft controlled
• Number of VFR vs. IFR hours flown
• Availability and reliability for both hardware and software
• Number of errors, violations, or deviations
• Number of accidents or incidents
• Number of worker injuries
• Accident/injury data

NOTE: The Safety Assurance component of an SMS can provide potential sources of data for use in SRM.
C. 5M Model of System Description. SRM Panels can use a variety of methods to create a system description. The 5M Model shown in Figure 2.6.1.6 is one useful method to capture the information needed to describe the system.
Figure 2.6.1.6. 5M Model

1) The 5M Model illustrates five integrated elements in any system:
• Mission. The functions that the system needs to perform
• Man/Person. The human operators and maintainers
• Machine. The equipment used in the system, including hardware, firmware, software, human-to-system interface, and avionics
• Management. The procedures and policies that govern the system’s behavior
• Media. The environment in which the system is operated and maintained

2) The 5M Model and similar techniques are used to deconstruct the proposed change to distinguish elements that are part of, or impacted by, the proposed change. These elements will later help to identify sources, causes, and hazards, as well as current (and proposed) hazard mitigations.
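As an illustration, a system description organized around the 5M Model might be captured in a simple structure. This is a hypothetical sketch: the class name, field names, and example values are invented, and a real system description would carry far more detail.

```python
# Hypothetical sketch of a 5M system description. Each field holds
# one of the five elements; the elements() view enumerates them as
# the candidate sources of hazards for the next SRM phase.

from dataclasses import dataclass, field


@dataclass
class FiveMDescription:
    mission: str                                    # functions the system must perform
    man: list = field(default_factory=list)         # operators and maintainers
    machine: list = field(default_factory=list)     # hardware/software/interfaces
    management: list = field(default_factory=list)  # procedures and policies
    media: str = ""                                 # operating/maintenance environment

    def elements(self):
        """Return (element, value) pairs, the candidate hazard sources."""
        return {
            "Mission": self.mission,
            "Man/Person": self.man,
            "Machine": self.machine,
            "Management": self.management,
            "Media": self.media,
        }


desc = FiveMDescription(
    mission="Provide back-up electrical power to a facility",
    man=["maintenance technician"],
    machine=["back-up battery", "charge monitor"],
    management=["monthly battery check procedure"],
    media="Unconditioned equipment room",
)
```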
D. Bounding the System: Limit Analysis to Scope of the Change. Bounding means limiting the analysis of the change or system to the elements that affect or interact with each other to accomplish the central function. The level of detail in the description varies, typically proportionally to the breadth of the change. The system description has both breadth and depth. Breadth refers to the system boundaries, and depth refers to the level of detail in the description. A thorough system description and the elements within it constitute the potential sources of hazards associated with the proposed change. This is critical to the subsequent phases of the SRM process.
1) The resulting bounded system description limits the analysis to the components necessary to adequately assess the safety risk associated with the change.

E. Required Depth and Breadth of the Analysis. The depth and breadth of the analysis necessary for SRM varies. Some of the factors used to determine the depth and breadth of the analysis include:
• The Size and Complexity of the Change under Consideration. A larger and more complex change may also require a larger and more complex analysis.
• The Breadth of a Change. SRM scope can be expected to increase if the change spans more than one organization, or department within an organization.
• The Type of Change. Procedural- or equipment-driven changes tend to require more analysis than a frequency change.

1) Selecting the appropriate scope and detail of the safety analysis is critical. The SRM Panel takes multiple factors into consideration when making these determinations. In general, safety analyses on more complex and far-reaching changes will require a greater scope and detail. For example, a major acquisition program could require multiple safety analyses involving hundreds of pages of data at the preliminary, sub-system, and system levels, evaluating numerous interfaces with other systems, operators, and maintainers. However, an operational procedure change at a lower level within a particular organization may require a less intensive analysis that describes the change and identifies the hazards and associated risks. In both cases, the SRM requirements are met, but the safety analysis is tailored to meet the needs of the decision-makers.
A primary consideration in determining both the scope and detail of the safety analysis is the decision the analysis must support. In other words, what information is required to know enough about the change, the associated hazards, and each hazard’s associated risk to choose which controls to implement and whether to accept the risk of the change? The scope of the analysis enables the making of informed decisions about whether the proposed change is acceptable for implementation from a safety perspective. If there is doubt about whether to include a specific element in the analysis, it is better for the panel to include that item at first, even though it might prove irrelevant during the hazard identification phase.
2) Guidelines to help determine the scope of the SRM effort include:
• Sufficient understanding of system boundaries to encompass possible impacts the system could have, including interfaces with peer systems, larger systems of which it is a component, and users and maintainers
• System elements
• Limiting the system to those elements that affect or interact with each other to accomplish the mission or function

3) At a minimum, the safety analysis should detail the system and its hazards so that the projected audience can completely understand the associated safety risk. Guidelines that help determine depth include:
• More complex and/or increased quantity of functions will increase the number of hazards and related causes
• Complex and detailed analyses will explore multiple levels of hazard causes, sometimes in multiple safety analyses
• Hazards that are suspected to have associated initial high or medium risk should be thoroughly analyzed for causal factors and likelihood
• The analysis should be conducted at a level that can be measured or evaluated
2.6.1.15. PHASE 2: IDENTIFY HAZARDS.
A. Identifying Hazards. Once the SRM Panel has completely and accurately described the system (Phase 1), it can identify hazards. A “hazard” is defined as any real or potential condition that can result in injury, illness, or death to people; damage to or loss of a system, equipment, or property; or damage to the environment. A hazard is a condition that is a prerequisite to an accident or incident.
1) A thorough system description and the elements within it constitute the potential sources of hazards associated with the proposed change. During the hazard identification phase, the panel identifies and documents potential safety issues, their possible causes, and corresponding effects. The level of detail required in the hazard identification process depends on the complexity of the change being considered and the stage at which the SRM Panel is performing the analysis. A more comprehensive hazard identification process leads to a more rigorous safety analysis.
B. Elements of Hazard Identification. In the “identify hazards phase,” the SRM Panel identifies hazards to the system (i.e., operation, equipment, and/or procedure) in a systematic way. There are numerous ways to do this, but all require at least three elements:
• Operational expertise
• Training or experience in various hazard analysis techniques
• A defined hazard analysis tool

C. The SRM Panel defines the data sources and measures necessary to identify hazards and to monitor for compliance with mitigation strategies. Data monitoring also helps detect hazards that are more frequent or more severe than expected, or mitigation strategies that are less effective than expected. Whoever performs the hazard analysis selects the tool that is most appropriate for the type of system being evaluated. Table 2.6.1.1 in paragraph 2.6.1.15 G lists several hazard identification and analysis tools and techniques with descriptions and references. These are just some of the many tools that panels can use to identify hazards.
D. Potential Sources of Hazards. The hazard identification stage considers all of the possible sources of hazards. Depending on the nature and size of the system under consideration, these could include:
• Equipment (hardware and software)
• Operating environment (including physical conditions, airspace, and air route design)
• Human operators
• Human-machine interface
• Operational procedures
• Maintenance procedures
• External services

1) The SRM Panel should refer to the system description it created using the 5M Model or other technique. These elements are often the sources for hazards.
E. Documenting Existing Hazards. The “Documenting Existing Hazards” Process describes the documentation and notification actions required when an existing hazard is identified. During Phase 2 of the SRM process, the SRM Panel or change proponent identifies hazards for the system change undergoing the analysis. Those hazards fall into three categories:
• Pre-existing hazards not in scope and not caused by the change
• Pre-existing hazards in scope and not caused by the change
• Hazards in scope and caused by the change

NOTE: Each of these three categories follows a specific process for ensuring ownership, documentation, and monitoring.
1) The overall objective of any SMS is to improve aviation safety. There may be instances in which a panel discovers existing high-risk hazards through an assurance program, a safety analysis, or other means. In those cases, corrective action is necessary to resolve the identified issue. If the panel is unable to find a corrective action that meets the requirements for acceptable risk under SRM, it must demonstrate that the recommended corrective action either increases the safety of the system or reduces the safety risk in the system, and it recommends that corrective action.
The implementing party continues to work toward identification of a corrective action that meets the SRM requirements and/or continues to work toward managing the risk down to an acceptable level on the implemented change. This applies to existing hazards only. Likewise, if an SRM Panel identifies existing high-risk hazards in a system, corrective action is necessary.
No one should be allowed to introduce new high risk as the result of implementing a new change to a system.

F. Causes, System State, and Effect Defined. During the hazard identification phase, the panel identifies and documents potential safety issues, their possible causes, the conditions under which hazards might be realized (system state), and corresponding effects. “Causes” are events that result in a hazard or failure, which can occur independently or in combinations. They include, but are not limited to:
• Human error
• Latent errors
• Design flaws
• Component failure
• Software errors

1) A “system state” is defined as the expression of the various conditions, characterized by quantities or qualities, in which a system can exist.
2) It is important to capture the system state that most exposes a hazard. The system description remains within the confines of any operational conditions and assumptions defined in existing documentation. System state can be described using one or some combination of the following terms:
• Operational and Procedural—types of operations
• Conditional—Instrument Meteorological Conditions vs. Visual Meteorological Conditions, peak vs. low work volumes, etc.
• Physical—Environmental effects, primary power source vs. back-up power sources, dry vs. contaminated runways, etc.
3) Any given hazard may have a different risk level in a different system state. Hazard assessment must consider all possibilities, from the least to the most likely, allowing for “worst case” conditions. It is important to capture all system states to identify worst credible outcomes and unique mitigations. The SRM Panel must ensure that the hazards to be included in the final analysis are “credible” hazards considering all applicable existing controls. They can use the following definitions as a guide in making such decisions:
• Worst—The most unfavorable conditions expected (e.g., extremely high levels of work, extreme weather disruption)
• Credible—Implies that it is reasonable to expect the assumed combination of extreme conditions will occur within the operational lifetime of the change

4) The goal of the safety analysis is to define appropriate mitigations for all risks associated with each hazard. While the worst credible outcome may produce the highest risk, the likelihood of the worst credible outcome is often very low. However, a less severe outcome may occur more frequently and result in a higher risk than the worst effect. The mitigations for the two outcomes may be different, and both must be identified. It is important for the panel to consider all possible outcomes in order to identify the highest risk and develop effective mitigations for each unique outcome.
5) The SRM Panel should consider identifying the accumulation of “minor” failures or errors that result in hazards with greater severity or likelihood than would result if the panel considered each failure or error independently.
6) The effect is a description of the potential outcome or harm of the hazard if it occurs in the defined system state.

7) The Bow-Tie Model in Figure 2.6.1.7 illustrates the relationship between causes, hazards, and the environment (system state) that enables their propagation into different effects.
While it may be used in conducting a safety analysis, the Bow-Tie model is included here as a means to conceptualize safety risk associated with hazards under various conditions. This model assumes each hazard can be represented by one or many causes, having the potential to lead to one or many effects (incidents or events) in various system states.
Figure 2.6.1.7. The Bow-Tie Model

8) The Bow-Tie model is a structured approach in which causes of hazards are directly linked to possible outcomes or effects in a single diagram. The underlying analysis can be simple or complex depending on what is appropriate for the change being analyzed.
9) For each effect associated with the hazard, one assigns a severity. To understand a hazard’s severity, one determines the hazard’s cause and the circumstances under which it occurred (e.g., the system state). The same model can be used to help determine the likelihoods associated with the different effects that are the result of a particular hazard given the outlined system states.
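The bow-tie relationship just described (causes on the left, the hazard at the knot, and effects on the right, conditioned by a system state) can be sketched as a simple data model. This is an illustrative sketch only; the class names, field names, severity ordering, and the example hazard are assumptions, not content from this manual.

```python
from dataclasses import dataclass, field
from typing import List

# Assumed severity ordering, least to most severe (illustrative only)
SEVERITY_ORDER = ["Minimal", "Minor", "Major", "Hazardous", "Catastrophic"]

@dataclass
class Effect:
    description: str    # potential outcome or harm of the hazard
    severity: str       # assigned per effect
    likelihood: str     # assessed for the given system state

@dataclass
class BowTie:
    hazard: str                                          # the knot of the bow tie
    causes: List[str] = field(default_factory=list)      # left side: one or many causes
    system_state: str = ""                               # conditions enabling propagation
    effects: List[Effect] = field(default_factory=list)  # right side: one or many effects

# Hypothetical example: one hazard, many causes, many effects
bt = BowTie(
    hazard="Loss of engine power",
    causes=["Fuel contamination", "Fuel exhaustion"],
    system_state="Single-engine aircraft over mountainous terrain",
    effects=[
        Effect("Forced landing off-airport", "Catastrophic", "Extremely Remote"),
        Effect("Precautionary landing at an airport", "Minor", "Remote"),
    ],
)

# The worst credible outcome drives the severity determination
worst = max(bt.effects, key=lambda e: SEVERITY_ORDER.index(e.severity))
```

Note that both effects would be carried forward into the analysis, since, as stated above, a less severe but more frequent outcome may carry the higher risk.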
Paragraph 2.6.1.17, A-E describes severity and likelihood determinations in further detail.
G. Tools and Techniques for Hazard Identification and Analysis. The following tools and techniques can be helpful in identifying and analyzing hazards. In many cases, using a single tool or technique will suffice. However, some cases may require multiple tools and techniques. Safety Engineers can provide additional guidance on which tool(s) to use for various types of changes.
1) Table 2.6.1.1 describes a selection of hazard identification and analysis tools and techniques.
Appendix C, "Hazard Identification and Analysis Tools and Techniques," provides more detailed information about the utility and use of tools and techniques.
Table 2.6.1.1. Selection of Hazard Identification and Analysis Tools and Techniques
Tool or Technique: Summary Description
Preliminary Hazard Analysis (PHA): The PHA provides an initial overview of the hazards present in the overall flow of the operation. It provides a hazard assessment that is broad, but usually not deep.
Operational Safety Assessment (OSA): The OSA is a development tool based on the assessment of hazard severity. It establishes how safety requirements are to be allocated between air and ground components and how performance and interoperability requirements might be influenced.
Fault Hazard Analysis (FHA): The FHA is a deductive method of analysis that personnel can use exclusively as a qualitative analysis or, if desired, can expand to a quantitative one. The FHA requires a detailed investigation of the subsystems to determine component hazard modes, causes of these hazards, and resultant effects on the subsystem and its operation.
What-If Analysis: The What-If Analysis methodology identifies hazards, hazardous situations, or specific accident events that could produce an undesirable consequence. One can use the What-If Analysis as a brainstorming method.
Scenario Analysis: The Scenario Analysis identifies and corrects potentially hazardous situations by postulating accident scenarios where they are credible and physically logical.
Change Analysis: The Change Analysis analyzes the hazard implications of either planned or incremental changes (e.g., operation, equipment, or procedure).
Interface Analysis: One uses the Interface Analysis to discover the hazardous linkages between interfacing systems.
Job Safety Analysis (JSA): One uses this technique to assess in detail the safety considerations in a single job or task.
Job Task Analysis (JTA): The foundation of the performance of a Human Error Analysis (HEA) is a task analysis, which describes each human task/sub-task within a system in terms of the perceptual (information intake), cognitive (information processing and decision making), and manual (motor) behaviors required of an operator, maintainer, or support person. It should also identify the skills and information required to complete the tasks; equipment requirements; the task setting; time and accuracy requirements; and the probable human errors and consequences of these errors. There are several tools and techniques for performing task analyses, depending on the level of analysis needed.
H. Tool Selection Criteria. Some considerations to take into account when selecting hazard identification/analysis tools include:
1) The necessary information and its availability
2) The timeliness of the necessary information and the amount of time required to conduct the analysis
3) The tool that will provide the appropriate systematic approach to:
• Identifying the greatest number of relevant hazards
• Identifying the causes of the hazards
• Predicting the effects associated with the hazards
• Assisting in recommending/identifying effective risk mitigations
2.6.1.17. PHASE 3: ANALYZE RISK.
A. Analyzing Risk. In this phase, the SRM Panel:
• Evaluates each hazard (from Phase 2) and the system state in which it potentially exists (from Phases 1 and 2) to determine what controls exist to prevent or reduce the hazard's occurrence or effect(s)
• Compares a system and/or sub-system, performing its intended function in anticipated operational environments, to those events or conditions that would reduce system operability or service
1) These events may, if not mitigated, continue until total system degradation and/or failure occurs. The controls that prevent or reduce these events are called existing controls. Once the SRM Panel documents the existing controls, it estimates the hazard's risk.
2) An accident rarely results from a single failure or event. Consequently, risk analysis is often not a single binary (on/off, open/close, break/operate) analytical look. While a simple approach may suffice in some cases, risk and hazard analyses can also examine degrees of degradation or potential failures resulting from degrading events that may be complex and involve primary, secondary, or even tertiary events.
3) “Risk” is defined as the composite of predicted severity and likelihood of the potential effect of a hazard in the worst credible system state. The SRM Panel can use quantitative or qualitative methods to determine the risk, depending on the application and the rigor it uses to analyze and characterize the risk. Different failure modes of the system(s) can impact both severity and likelihood in unique ways.
B. Existing Controls. In this phase, the SRM Panel evaluates each hazard and the system context in which the hazard potentially exists to determine what prevents or reduces the hazard’s occurrence or mitigates its effects. These mitigations are called existing controls. A control can only be considered existing if it has been validated and verified with objective evidence. Until it is validated, it is considered a recommended requirement.
1) It is important to document existing controls as the panel’s understanding of existing controls impacts its ability to establish credible severity and likelihood determinations. When identifying existing controls, the SRM Panel takes credit for controls specific to the change, hazard, and system state.
C. Determining Severity. “Severity” is the measure of how bad the results of an event are predicted to be. One determines severity by the worst credible outcome. The SRM Panel must examine all effects and consider the worst credible severity. One does not consider likelihood when determining severity; determination of severity is independent of likelihood. The goal of the safety analysis is to define appropriate mitigations for all risks associated with each hazard. While the worst credible outcome may produce the highest risk, the likelihood of the worst credible outcome is often very low. However, a less severe outcome may occur more frequently and result in a higher risk than the worst effect. The mitigations for the two outcomes may be different and both must be identified. It is important for the panel to consider all possible outcomes in order to identify the highest risk and develop effective mitigations for each unique outcome.
D. Likelihood and Risk Assessment. “Risk” is the composite of predicted severity and likelihood of the potential effect of a hazard in the worst credible system state; likelihood is an expression of how often one expects an event to occur.
1) One must consider severity in conjunction with the determination of likelihood. Likelihood is determined by how often one can expect the resulting harm to occur at the worst credible severity. Table 2.6.1.2 shows likelihood definitions.
2) The SRM Panel uses likelihood definitions (in the first three columns) when acquiring new or modifying existing systems. Flight Procedures definitions (in the sixth column) can be used when assessing flight procedures. Safety professionals can use the likelihood definitions for both Systems and Flight Procedures prior to the development and implementation of the SMS.
E. Use of Qualitative and Quantitative Data. In assessing risk, one can use both quantitative and qualitative methods. Using quantitative data is preferred, as it tends to be more objective; however, when quantitative data are not available, it is acceptable to rely on qualitative data and expert judgment. Qualitative judgment varies from person to person, so if only one person is performing the analysis, the result should be considered an opinion. With a team of experts involved in the analysis, one can consider the result qualitative data or expert judgment.
1) Characteristics of quantitative data include:
• Data are expressed as a quantity, number, or amount
• Data tend to be more objective
• Data allow for more rational analysis and substantiation of findings
2) Modeling. Modeling techniques, such as event-tree analysis, permit either statistical or judgmental inputs. If modeling is required and data are available, the risk assessment should be based on statistical or observational data (e.g., radar tracks, hours flown, labor hours, etc.). Where there are insufficient data to construct purely statistical assessments of risk, judgmental inputs can be used, but they should be quantitative. For example, the true rate of a particular type of activity may be unknown, but can be estimated using judgmental input. In all cases, quantitative measures should take into consideration the fact that historical data may not represent future operating environments. In such cases, some adjustment to the input data may be required.
3) Characteristics of qualitative data include:
• Data are expressed as a measure of quality
• Data are subjective
• Data allow for examination of subjects that often cannot be expressed with numbers but by expert judgment
Table 2.6.1.2. Likelihood Definitions using Quantitative Data
Frequent: Probability of occurrence per operation/operational hour is equal to or greater than 1x10^-3
Probable: Probability of occurrence per operation/operational hour is less than 1x10^-3, but equal to or greater than 1x10^-5
Remote: Probability of occurrence per operation/operational hour is less than 1x10^-5, but equal to or greater than 1x10^-7
Extremely Remote: Probability of occurrence per operation/operational hour is less than 1x10^-7, but equal to or greater than 1x10^-9
Extremely Improbable: Probability of occurrence per operation/operational hour is less than 1x10^-9
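The quantitative thresholds in Table 2.6.1.2 amount to a simple classification rule, sketched below. The function name is an assumption for illustration only.

```python
def classify_likelihood(p_per_op_hour: float) -> str:
    """Map a probability of occurrence per operation/operational hour to a
    likelihood category, per the thresholds in Table 2.6.1.2."""
    if p_per_op_hour >= 1e-3:
        return "Frequent"
    if p_per_op_hour >= 1e-5:
        return "Probable"
    if p_per_op_hour >= 1e-7:
        return "Remote"
    if p_per_op_hour >= 1e-9:
        return "Extremely Remote"
    return "Extremely Improbable"

# A rate of 1 occurrence in 1,000,000 operational hours falls between
# 1x10^-7 and 1x10^-5, so it classifies as Remote.
category = classify_likelihood(1e-6)
```

Note that each boundary value belongs to the more likely category ("equal to or greater than"), so exactly 1x10^-3 classifies as Frequent.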
2.6.1.19. PHASE 4: ASSESS RISK.
A. Assessing Risk. In this phase, the SRM Panel:
• Compares each hazard's associated risk (as identified in Phase 3) and plots the risks on a pre-planned risk acceptability matrix
• Determines a hazard's priority by the location of its associated safety risk on this risk matrix
• Gives higher priority hazards the greatest attention in the treatment of risk
B. Risk Matrix Definition. A risk matrix is a graphical means of determining risk levels. The columns in the matrix reflect previously introduced severity categories, and its rows reflect previously introduced likelihood categories. The SRM Panel assesses risk by using the risk matrix in Figure 2.6.1.8.
1) The risk levels used in the matrix are defined as:
a) "High"—Unacceptable risk; the change cannot be implemented unless the hazard's associated risk is mitigated so that risk is reduced to a medium or low level. Tracking, monitoring, and management are required. Hazards with catastrophic effects that are caused by: (1) single point events or failures, (2) common cause events or failures, or (3) undetectable latent events in combination with single point or common cause events, are considered high risk, even if the possibility of occurrence is extremely improbable.
b) "Medium"—Acceptable risk; minimum acceptable safety objective; the change may be implemented, but tracking, monitoring, and management are required.
c) "Low"—Acceptable without restriction or limitation; hazards are not required to be actively managed but must be documented.
2) A catastrophic severity and corresponding extremely improbable likelihood qualify as medium risk, as long as the effect is not the result of a single point or common cause failure. If the cause is a single point or common cause failure, the effect of the hazard is categorized as high risk and placed in the red part of the split cell in the bottom right corner of the matrix.
3) A “single point failure” is defined as a failure of an item that would result in the failure of the system and is not compensated for by redundancy or an alternative operational procedure. An example of a single point failure is a system with redundant hardware, in which both pieces of hardware rely on the same battery for power. In this case, if the battery fails, the system will fail.
4) A “common cause failure” is defined as a single fault resulting in the corresponding failure of multiple components. An example of a common cause failure is redundant computers running on the same software, which is susceptible to the same software bugs.
5) The risk index and recommended actions are defined as shown in Figure 2.6.1.8.
Figure 2.6.1.8. Risk Matrix
C. Types of Risk.
1) Initial Risk. Initial risk is the composite of the severity and likelihood of a hazard considering only verified controls and documented assumptions for a given system state. It describes the risk at the preliminary or beginning stage of a proposed change, program, or assessment.
2) Current Risk. Current risk is the predicted severity and likelihood of a hazard at the current time. When determining current risk, both validated controls and verified controls may be used in the risk assessment. Current risk may change based on the actions taken by the decision-maker that relate to the validation and/or verification of the controls associated with a hazard.
3) Predicted Residual Risk. Predicted residual risk is the term used until the safety analysis is complete and all safety requirements have been verified. Predicted residual risk is based on the assumption that all safety requirements will be validated and verified.
4) Residual Risk. Residual risk is the risk that remains after all control techniques have been implemented or exhausted and all controls have been verified. Only verified controls can be used to assess residual risk.
D. Ranking and Prioritizing Risk for Each Hazard. The SRM Panel follows these guidelines in ranking and prioritizing risk for each hazard:
1) Rank hazards according to the severity and the likelihood of their associated risk (illustrated by where they fall on the risk matrix).
2) To plot a hazard on the risk matrix, select the appropriate severity column and move down to the appropriate likelihood row.
3) Plot the hazard in the box where the severity and likelihood of the effect associated with the hazard meet.
4) If this box is red, the risk associated with the hazard is high; if the box is yellow, the risk associated with the hazard is medium; and if the box is green, the risk associated with the hazard is low.
NOTE: Ranking the risks associated with the identified hazards prioritizes treatment and mitigation. High-risk outcomes must be mitigated before the proposed change can be implemented.
E. Handling High Risk Hazards. When a High Risk Hazard (HRH) is identified by an SRM Panel or change proponent, the proposed change cannot be implemented until the following conditions have been met:
• The HRH is mitigated to an acceptable level of risk (medium or low) • The risk is accepted • The mitigations are approved by upper management
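The matrix logic described in this phase, including the split cell rule from paragraph B, can be sketched as a lookup function. Only the catastrophic/extremely improbable rule is taken from the text above; the severity category names and the scoring used for the remaining cells are hypothetical stand-ins for Figure 2.6.1.8, which is not reproduced here.

```python
# Assumed severity category names (not defined in this section)
SEVERITY_RANK = {"Minimal": 1, "Minor": 2, "Major": 3,
                 "Hazardous": 4, "Catastrophic": 5}
LIKELIHOOD_RANK = {"Extremely Improbable": 1, "Extremely Remote": 2,
                   "Remote": 3, "Probable": 4, "Frequent": 5}

def assess_risk(severity: str, likelihood: str, spf_or_ccf: bool = False) -> str:
    """Return "High", "Medium", or "Low" for a severity/likelihood pair.

    Only the split-cell rule is taken from the text; the remaining cell
    assignments are a hypothetical stand-in for Figure 2.6.1.8.
    """
    # Split cell (bottom right corner of the matrix): catastrophic severity
    # with extremely improbable likelihood is medium risk, unless the effect
    # results from a single point or common cause failure (then high risk).
    if severity == "Catastrophic" and likelihood == "Extremely Improbable":
        return "High" if spf_or_ccf else "Medium"
    # Hypothetical scoring for all other cells, not the actual figure
    score = SEVERITY_RANK[severity] + LIKELIHOOD_RANK[likelihood]
    return "High" if score >= 8 else "Medium" if score >= 5 else "Low"
```

For example, the battery-backed redundant system described in paragraph 3) above would pass `spf_or_ccf=True`, forcing the catastrophic/extremely improbable cell to high risk.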
2.6.1.21. PHASE 5: TREAT RISK.
A. Treating Risk. In this phase, the SRM Panel develops and manages options to deal with risk (from Phase 4). Effectively treating risk involves:
• Identifying feasible mitigation options
• Developing a risk treatment plan and accepting the predicted residual risk
• Developing a monitoring plan detailing review cycles for evaluating the effectiveness of mitigations
• Implementing and verifying the mitigations
• Monitoring the effectiveness of the mitigations
1) In the treat risk phase, the SRM Panel develops alternative strategies for managing the risk associated with a hazard. These strategies become actions that reduce the risk of the hazard's effects on the system (e.g., human interface, operation, equipment, procedures). While the SRM Panel develops options to mitigate risk, it is the responsibility of the organization(s) making the proposed change to implement and verify the mitigations, as well as monitor their effectiveness.
B. Risk Mitigation Definition. Risk mitigation is taking action to reduce the risk of the hazard's effects. Examples of risk mitigation include:
• Revising the system design
• Modifying operational procedures
• Establishing contingency arrangements
1) When risk is determined to be unacceptable, the SRM Panel identifies and evaluates risk mitigation measures that would reduce the risk to an acceptable level. Once identified, the SRM Panel assesses how the proposed mitigation measures affect the overall risk. If necessary, the team repeats the process until a combination of measures reduces the risk to an acceptable level.
2) When risk mitigation strategies cross organizational boundaries, those stakeholder organizations should approve documentation and accept risk in accordance with Table 2.6.1.3 and Table 2.6.1.4.
3) If the risk does not meet the predetermined acceptability criteria, it must be reduced to an acceptable level, using appropriate mitigation procedures, before the change is implemented.
Even when the risk is classified as acceptable, if any measures could further reduce the risk, the appropriate party should:
• Make an effort to implement these measures, if feasible
• Consider the technical feasibility of further reducing the risk
• Evaluate all such cases individually
4) Remember that when an individual or organization "accepts" a risk, it does not mean that the risk is eliminated. Some level of risk remains; however, the individual or organization has accepted that the predicted residual risk is sufficiently low that it is outweighed by the benefits.
5) If SRM Panel members identify systemic hazards, then the impacted managers can identify and implement risk mitigation efforts. Managers should also assess proposed mitigations for possible collateral system impacts and initiate appropriate corrective actions.
C. Risk Mitigation Strategies. Risk mitigation normally requires the appropriate management’s informed decision to approve, fund, schedule, and implement one, or more, risk mitigation strategies. The objective of this phase is to implement appropriate plans to mitigate the risk associated with identified hazards and their effects. The SRM Panel develops, documents, and recommends appropriate risk mitigation strategies. The risk mitigation approach selected may fall into one or more of the following categories:
• Risk avoidance strategy
• Risk transfer strategy
• Risk assumption strategy
• Risk control strategy
1) Once the SRM Panel selects and develops risk mitigation strategies, the appropriate management can identify the impact on other organization(s) and coordinate/obtain agreement on those strategies with the affected organization(s). In addition, the SRM Panel establishes a monitoring plan to ensure that risk mitigation strategies are effective. It repeats the risk mitigation process until risk is reduced to an acceptable level.
2) Hazard tracking is a key element of this risk management phase. Section 1.11.11 provides further detail on hazard tracking.
D. Risk Avoidance Strategy. The risk avoidance strategy averts the potential of occurrence and/or consequence by selecting a different approach or by not participating in the operation, procedure, or system (hardware and software) development. SRM Panels may pursue this technique when multiple alternatives or options are available.
1) The risk avoidance strategy is more likely used as the basis for a “go” or “no-go” decision at the start of an operation or program. The avoidance of risk is from the perspective of the overall organization. Thus, an avoidance strategy is one that involves all the stakeholders associated with the proposed change.
E. Risk Transfer Strategy. The risk transfer strategy shifts the ownership of risk to another party. Organizations transfer risk primarily to assign ownership to the organization or operation most capable of managing it. The receiving party must then accept the risk, which must be documented using a Letter of Agreement, Statement of Agreement, Memorandum of Agreement, or other type of document. Examples of risk transfer may include:
• Transfer of responsibility for a function from one party to another
• Development of new policies or procedures to change "ownership" of a particular element to a more appropriate organization
• Contract procurement for specialized tasks from more appropriate sources (e.g., contract maintenance)
• Transfer of systems from the operating organization to an organization that provides services
1) The receiving organization may be better equipped to mitigate the risk at the operational or organizational level. Transfer of risk, while theoretically an acceptable means of mitigating risk, cannot be the only method used to treat high risk associated with a hazard. The SRM Panel must still mitigate the safety risk to medium or low levels before it can be accepted.
2) In addition, when hazards (and associated risks) that are outside the scope of the SMS are identified (e.g., occupational safety, physical, and information security), organizations transfer the management and mitigation of these risks to the appropriate organization.
F. Risk Assumption Strategy. The risk assumption strategy is simply accepting the likelihood or probability and the consequences associated with a risk’s occurrence. It is not acceptable to use an assumption strategy to treat high risk associated with a hazard. The safety risk must still be reduced to medium or low before it can be accepted, as required by SRM documented in this manual.
G. Risk Control Strategy. A control is anything that mitigates the risk of a hazard’s effects. A control is the same as a safety requirement. All controls must be written in requirement language. 1) A risk control strategy helps to develop options and alternatives and take actions that lower or eliminate the risk. Examples include implementing additional policies or procedures, developing redundant systems and/or components, and using alternate sources of production.
When a control is selected and documented, it becomes a safety requirement. A correct requirement is unambiguous and verifiable. Controls can be complex or simple.
H. Safety Order of Precedence. There is a preferred order for the development of risk mitigation controls:
• Design for minimum risk
• Incorporate safety devices
• Provide warning
• Develop procedures and training
1) Safety professionals use these in relation to system (hardware/software) development and modification. Table 2.6.1.3 shows the safety order of precedence, which reflects this order.
Table 2.6.1.3. Safety Order of Precedence
Priority 1. Design for minimum risk
Definition: Design the system (e.g., operation, procedure, human-to-system interface, or equipment) to eliminate risks. If the identified risk cannot be eliminated, reduce it to an acceptable level by selecting alternatives.
Examples: 1. If a collision hazard exists because of a transition to a higher Minimum En Route Altitude at a crossing point, moving the crossing point to another location would eliminate the risk. 2. If "loss of power" is a hazard to a system, adding a second independent power source reduces the likelihood of the "loss of power" hazard.
Priority 2. Incorporate safety devices
Definition: If identified risks cannot be eliminated through alternative selection, reduce the risk by using fixed, automatic, or other safety features or devices, and make provisions for periodic functional checks of safety devices.
Examples: 1. An automatic "low altitude" detector in a surveillance system. 2. Interlocks to prevent exposure to radiation or high voltage. 3. Automatic engine restart logic.
Priority 3. Provide warning
Definition: When neither alternatives nor safety devices can effectively eliminate or adequately reduce risk, warning devices or procedures are used to detect the condition and to produce an adequate warning. The warning must be provided in time to avert the hazard's effects. Warnings and their applications are designed to minimize the likelihood of inappropriate human reaction and response.
Examples: 1. A warning displayed on an operator's panel. 2. An "Engine Failure" light in a helicopter. 3. A flashing Minimum Safe Altitude Warning or Conflict Alert Indicator on a radar screen.
Priority 4. Develop procedures and training
Definition: Where it is impractical to eliminate risks through alternative selection, safety features, and warning devices, procedures and training are used. However, management must concur when procedures and training are solely applied to reduce risks of catastrophic or hazardous severity.
Examples: 1. A missed approach procedure. 2. Training in stall/spin recovery. 3. A procedure to vector an aircraft above a Minimum Safe Altitude on a Very High Frequency Omni-directional Range airway. 4. Procedures for loss of communications.
I. Risk Not Sufficiently Reduced. If the risk cannot be reduced to an acceptable level after attempting all possible mitigation measures, then the change does not satisfy the safety requirements.
Therefore, the change proponent must either revise the original objectives or abandon the proposed change. If the proposal is unacceptable, the change cannot be implemented. This conclusion must be included in the SRMD.
J. Hazard Tracking. Hazard tracking is a dynamic process in which hazards and their associated safety risk information and safety requirements are entered into a database. The information is updated throughout the lifecycle of a system or change. Hazard tracking, in part, includes documenting safety requirements, providing the status of requirements validation and verification, verifying implementation, and updating the current and predicted residual risk levels before acceptance. Hazard tracking also assesses the effectiveness of existing and recommended safety requirements in the control of the identified hazards. The purpose of hazard tracking and risk resolution is to ensure a closed-loop process of managing safety hazards and risks.
1) A useful practice is to use a restricted-access, web-based system to document all hazards and their associated risk information. All departments within an organization can then use a hazard tracking system provided by the organization to capture all safety hazards. In this manner, organizations formally identify all hazards, and track and monitor all initial medium and high risk hazards for the lifecycle of the system or change, or until they mitigate the risk to low (as defined in Section 1.10.2). Organizations can also verify the effectiveness of the controls mitigating all risks through continuous monitoring. If, through SRM processes and/or safety assurance measures, the mitigations are found ineffective in reducing the risk to an acceptable level, the change proponent and/or SRM Panel should reassess the risk and implement additional mitigations until further monitoring shows the risk is mitigated to low. Hazards with low associated risk can, by definition, be considered to meet the organization's target level of safety and may not require further mitigation.
2) A key principle of the SMS is that SRM and safety assurance are integrated. Through the SRM process, an organization can develop safety risk mitigations and monitoring plans. Through safety assurance processes, the organization monitors those mitigations and identifies new hazards or necessary changes, which must go through the SRM process. Hazard tracking is a means to ensure that these two SMS components function together to manage safety risk.
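A hazard tracking record of the kind described in paragraph J can be sketched as a small data structure that carries the safety requirements, their validation/verification status, and the risk levels that are updated over the lifecycle. Class and field names are assumptions; the example control is adapted from the "loss of power" example in Table 2.6.1.3.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Control:
    requirement: str        # written in requirement language
    validated: bool = False
    verified: bool = False  # only verified controls count toward residual risk

@dataclass
class HazardRecord:
    hazard: str
    initial_risk: str       # considers only verified controls and documented assumptions
    current_risk: str       # may credit both validated and verified controls
    controls: List[Control] = field(default_factory=list)

    def open_for_tracking(self) -> bool:
        # Initial medium and high risk hazards are tracked for the lifecycle
        # of the system or change, or until risk is mitigated to low.
        return self.initial_risk in ("Medium", "High") and self.current_risk != "Low"

# Hypothetical record based on the "loss of power" example
rec = HazardRecord(
    hazard="Loss of power",
    initial_risk="High",
    current_risk="Medium",
    controls=[Control("The system shall have a second independent power source.")],
)
```

The closed-loop aspect is the update cycle: as safety assurance monitoring validates and verifies each control, `current_risk` is reassessed, and the record is closed only when monitoring shows the risk mitigated to low.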
K. Training and Access to HTS. It is a good practice to use a Hazard Tracking System (HTS) to track hazards. An HTS can be a secure web site housed behind a firewall. In some systems, there are two separate HTS interfaces – one for systems acquisitions/engineering and one for operations. Company employees often can obtain access to, or training on, such a system by contacting their department’s Safety Manager or Safety Engineer.
L. Developing a Control Implementation/Monitoring Plan. In addition to tracking the hazards, the SRM Panel develops a plan to:
• Verify the risk mitigations
• Monitor the effectiveness of those mitigations
• Conduct post-implementation assessments to verify the results of the analysis
1) These actions are part of the treat risk phase of the safety analysis. A sample Recommended Control Implementation/Monitoring Plan is shown in Table 2.6.1.4.
Table 2.6.1.4. Sample Recommended Control Implementation/Monitoring Plan
Implementation of Controls
Task: The recommended mitigation that was designed for the change. Example: Safety device X will be installed in Equipment Z.
Responsible: The individual, division, or organization required to render account concerning the identified task. Example: Equipment Technicians.
Due Date: The date by which the responsible party must have completed the identified task. Example: December 5, 2011.
Status: The state of the task. Example: Open*.
Monitoring
Task: A function to be performed; an objective. Example: Internal audit of the maintenance records.
Responsible: The individual, division, or organization required to render account concerning the identified task. Example: Quality Assurance Office.
Frequency: The frequency with which the task will be performed. Example: Monthly, quarterly, etc.
Status: The state of the task. Example: Ongoing*, Closed.
NOTE: * "Open" means that the due date of the task has not arrived; "Closed" means that the task has been completed (generally one would want to include the date of task completion).
Sometimes the task is considered to be "Ongoing", meaning that the task is to be performed throughout the lifecycle of the system.
2) It is normally required that employees formally monitor all initial medium and high risk hazards for the lifecycle of the system or change, or until they mitigate the risk to low (as defined in Section 1.10.2) and verify the effectiveness of the controls mitigating the risk. After mitigations have been verified through monitoring and a target level of risk has been achieved, the change proponent can continue current/existing monitoring and evaluation processes, so that the change becomes the standard operating procedure.
3) Safety professionals conduct post-implementation assessments for the life of the system or change, as defined in the SRMD monitoring plan. The frequency of assessments depends on the type, the potential safety impact, and/or the complexity of the change, as well as the depth and breadth of the original analysis. These assessments include updating the SRMD; existing support mechanisms should be considered and may include Independent Operational Test and Evaluation groups, Flight Inspection departments, an Air Traffic Evaluation and Auditing Program, and SRM audits.
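The task rows and status values of Table 2.6.1.4 can be represented as a small record type. The class and field names are illustrative assumptions; the two example tasks are taken from the table itself.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class PlanTask:
    task: str                        # the mitigation or monitoring function
    responsible: str                 # party accountable for the task
    due: Optional[date] = None       # due date (implementation tasks)
    frequency: str = ""              # e.g., "Monthly" (monitoring tasks)
    completed: Optional[date] = None

    def status(self) -> str:
        if self.frequency:
            return "Ongoing"         # performed throughout the system lifecycle
        # "Open": due date has not arrived; "Closed": task has been completed
        return "Closed" if self.completed else "Open"

# Example rows from Table 2.6.1.4
install = PlanTask("Safety device X will be installed in Equipment Z.",
                   "Equipment Technicians", due=date(2011, 12, 5))
audit = PlanTask("Internal audit of the maintenance records",
                 "Quality Assurance Office", frequency="Monthly")
```

Recording the completion date when a task closes, as the table's note suggests, is what makes the plan auditable later.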
2.6.1.23. SAFETY RISK MANAGEMENT DOCUMENT (SRMD).
A. SRMD: Tool for Decision Making. An SRMD thoroughly describes the safety analysis for a proposed change. It documents the evidence to support whether the proposed change to the system is acceptable from a safety risk perspective. The SRMD also contributes (from a programmatic or management perspective) to the decision to implement a change. The department responsible for implementing the change maintains all documentation associated with the SRM process, including the SRMD, for the lifecycle of the system or change.
1) The SRMD is a living document that may be modified during the lifecycle of the program.
Section 1.13.9 discusses this further.
B. SRMD Contents. An SRMD provides sufficient detail about a proposed change to a current system or the introduction of a completely new system into an operation or larger overall system. It should be a single source that enables the management personnel to understand the change, its associated risks, and corrective steps taken (or proposed) to reduce the initial and subsequent residual risks to an acceptable level. The document must stand alone (i.e., it must contain sufficient detail about the current or proposed system to enable the reader to comprehend what steps have been taken to identify safety issues and the corrective steps taken (or proposed)).
1) An SRMD contains, at a minimum:
a) Identification of the system to be introduced or changed, including:
• A description of the current system and proposed change or introduction
• Current controls in place
• Pertinent interfaces and support systems required by the introduction and/or change to function properly
• Reference to any SRMDs submitted on the current system or changes being analyzed
• A statement reflecting the impact of the change or introduction (local, regional, national, etc.)
b) Identification of hazards and causal factors:
• Description of methodology and tools used
• Existing controls affected by the introduction and/or change proposed
• The hazards and the scenarios and/or circumstances where they exist
c) Analysis, assessment, and mitigation of the associated risks:
• Documentation of the identified risks, including:
o Initial risk level (in terms of severity and likelihood), and when and how they appear in the current or proposed system
o Whether they are associated with existing risks and/or controls, and how the introduction of a new system or a change in the existing system affects the risk
• Controls (mitigations) and their effect on identified risks
• Predicted residual and accepted risks
• Documentation of how the risks and their associated controls will be tracked and monitored throughout the lifecycle of the system or change
d) Strategy for validation and verification of the proposed change or introduction:
• Means that will be used to obtain measurable data to monitor the effectiveness of the controls:
o Who will be responsible for reporting, collecting, and analyzing the data
o How the data will be analyzed
• Means that will be used to determine if adjoining systems are adversely affected:
o Who will be responsible for reporting, collecting, and analyzing the data
o How the data will be analyzed
• What will determine that safety requirements (existing and recommended) are met and satisfied
• Future plans for updating the present SRMD
2) The SRM Panel documents any change that could have safety consequences in the provision of safety-related services. The scale of an SRMD varies depending on the type and complexity of a proposed system change.
3) The level (i.e., national, regional, or local) at which SRM is initiated may vary by organization or change proponent. If the change is at the regional or local levels, two methods for documenting SRM can be used:
• Address the change in a system-wide SRMD through site-specific parameter ranges
• Develop and append a local-level SRMD to the larger, system-wide SRMD
4) While panels strive to reach consensus, there may be instances in which not all panel members agree on the results of the safety analysis. In that case, the results are documented, ensuring that the opinions of dissenters are also captured and delivered to the decision-maker.
5) The SRMD should be written so that it can be understood by a reviewer familiar with the discipline(s) relevant to the change (e.g., ATC controller, chief pilot, chief inspector, manager, chief dispatcher). There should be enough detail that a reviewer unfamiliar with the program, project, or organization can understand the change and the system within which it is contained.
The SRMD should include thorough descriptions of the identified hazards and provide rationales for the panel’s severity and likelihood assessments for each hazard. Using the SRMD Review Checklist for quality control will minimize delays caused by clarifications requested by SRMD reviewers and approvers. Furthermore, the originating facility/organization assigns SRM documentation numbering when drafting the document. Not all qualifiers will apply to every change; the facility/organization uses each type only when applicable.
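The minimum content areas listed above lend themselves to a simple completeness check before an SRMD goes to reviewers. As an illustration only, the section keys below are hypothetical, not a prescribed SRMD schema:

```python
# Assumed keys mapping to the four minimum SRMD content areas above;
# an organization would define its own schema.
REQUIRED_SRMD_SECTIONS = {
    "system_identification",   # a) current system, proposed change, interfaces
    "hazard_identification",   # b) methodology, existing controls, hazards
    "risk_analysis",           # c) initial/residual risk, controls, tracking
    "validation_strategy",     # d) monitoring data, adjoining systems, updates
}

def missing_sections(srmd: dict) -> set:
    """Return the minimum content areas that are absent or empty."""
    return {key for key in REQUIRED_SRMD_SECTIONS if not srmd.get(key)}

# Illustrative draft with two areas still incomplete
draft = {"system_identification": "...", "hazard_identification": "..."}
gaps = missing_sections(draft)
```

A pre-submission check of this kind serves the same purpose as the SRMD Review Checklist mentioned above: catching gaps before they cause review delays.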
C. Additional Resources for SRMD Development. In many instances, existing safety and system engineering processes produce documents that SRM Panels use to support the analysis portion of an SRMD.
D. SRMD Benefits. An SRMD provides a standardized approach to developing a safety case that:
• Reduces omissions and inconsistencies in safety analysis preparation and conduct
• Eases documentation development
• Makes the sharing of safety risk data more manageable
• Strengthens SRM skills
• Encourages a safety culture
• Ensures operational safety data are monitored to reduce hazards
• Provides assurance to decision-makers that SMS processes are being followed
• Establishes responsibility/accountability
• Makes the process repeatable and reduces re-study of similar change proposals
E. Difference Between Risk Acceptance and SRMD Approval. Approving the SRMD means that the approving party agrees that the analysis accurately reflects the safety risk associated with the change, the underlying assumptions are correct, and the findings are complete and accurate.
1) Accepting the safety risk is a certification by the appropriate manager that he/she understands the safety risk associated with the change and he/she accepts that safety risk.
2) Both approving the SRMD and accepting the safety risk are necessary, along with other inputs (e.g., costs, benefits), before implementing a change in a major system.
2.6.1.25. SRMD APPROVALS.
A. SRMD Approval Level Requirements. SRMD approvals depend on the span of the program, its associated risk(s), the mitigation(s) used to control the risk, and other regional-specific guidance. “SRM Documentation Approval” is certification that the documentation was developed properly, hazards were systematically identified, risk was appropriately assigned, suitable mitigations were proposed, and a sound implementation and monitoring plan was prepared. SRMD approval does not constitute acceptance of the risk associated with the change or approval to implement the change.
1) The approval and review of an SRMD follows a process for establishing and maintaining quality assurance for the review and evaluation of SRM documentation. The SRM Panel should involve the approving authority early in the SRM process to obtain agreement on the assumptions and processes that it will use. The level of approval required for an SRMD will be based on the nature of the change and the risk identified.
NOTE: The approval of the SRMD described here is an activity that takes place within the Aviation Organization’s management system, and it DOES NOT, NOR SHOULD IT, involve GACA personnel.
B. Post SRMD Approval. The change proponent should retain a copy of the SRMD for the lifecycle of the system or change. Upon request, the proponent of the change should provide their management with copies of SRMDs. SRMDs may also serve as inputs to existing approval processes.
C. SRMDs Related to Changes Not Approved or Implemented. The SRMD should be kept on file even if it is not approved or if the change is not implemented. Employees can use this information in assessing similar change proposals or as inputs to SRMDs for other change proposals. SRMDs that are not approved, or those used by a decision-maker in his/her decision not to implement a change, also provide proof that the SMS is performing its intended function (i.e., reducing the safety risk in the civil aviation environment). Relevant oversight entities may also audit this documentation.
D. SRMD Lifecycle. The results of safety analyses are a part of the system baseline information. Company employees may need to update or change an SRMD as a project progresses and as they modify decisions. Safety monitoring may indicate that the controls are less effective than originally expected or that additional hazards exist, which may require additional mitigations. Any change that may affect the assumptions or hazards identified in the SRMD or the estimated risk necessitates an amendment to the SRMD.
1) In addition, the SRMD includes a monitoring plan to conduct post-implementation assessments to verify the results of the previous analyses and update the SRMD. While necessary for the life of the system or change, the periodicity of these assessments may vary depending on the type, potential safety impact, and/or complexity of the change, as well as the depth and breadth of the original analysis.
2) When developing the plans to monitor the change and update the SRMD, existing support mechanisms should be taken into account. Based on the results of audits and evaluations of how the system performs, an organization may need to modify the SRMD, which could include reopening the safety analysis for additional assessment. The Safety Assurance portion of an SMS should further describe these processes.
2.6.1.27. ACCEPTING RISK.
A. Effect of SRM on Safety Levels. Through SRM, decision-makers knowingly accept risk into the KCAS and thus are better able to manage it; this leads to increased safety. Understanding the consequences of risk increases the ability to anticipate and control the impacts of internal and/or external events on a program.
B. Accepting Safety Risk. Risk Acceptance is the certification by the appropriate management official that he/she understands the safety risk associated with the change, the mitigations are feasible and will be implemented, and he/she accepts that safety risk into the civil aviation environment.
1) Accepting the safety risk is a prerequisite to making a proposed change. Risk acceptance is based on predicted residual risk. Accepting the safety risk is different from approving an SRMD.
2) Approving an SRMD indicates that the analysis accurately reflects the safety risk associated with the change, the underlying assumptions are correct, and the findings are complete and accurate.
C. Authority to Accept Safety Risk. The acceptance of the safety risk depends on the span of the program or change, its associated risk, and the mitigation used to control the risk. Only those responsible for the change and in a position to manage the risk can accept the risk into the civil aviation environment.
1) Changes that have high, medium, or low initial safety risk, some of which may have been mitigated to lower levels, will need to be managed by an appropriately high level of management.
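The principle above (acceptance authority scales with the initial risk of the change, even after mitigation) can be sketched as a simple routing table. The three management tiers below are illustrative assumptions; actual acceptance authorities are defined by each organization, not by this example.

```python
# Hypothetical mapping from initial risk level to the management level
# authorized to accept the risk; tier names are illustrative only.
ACCEPTANCE_AUTHORITY = {
    "High": "Accountable Executive",
    "Medium": "Department Director",
    "Low": "Line Manager",
}

def acceptance_level(initial_risk: str, residual_risk: str) -> str:
    """Acceptance authority is keyed to the initial risk: a hazard
    mitigated from high to low still requires high-level acceptance."""
    return ACCEPTANCE_AUTHORITY[initial_risk]
```

Under this reading, a change whose risk was mitigated from high to low would still be routed to the highest tier for acceptance.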
2.6.1.29. TRACKING CHANGES.
A. Change Tracking. In addition to the SRMDM and SRMD, each department within an organization should maintain a tracking matrix containing proposed system changes within its purview and the related outcome.
B. Change Tracking Matrix Responsibilities. Safety personnel review and analyze the data provided in the sample Change Tracking Matrix, and when appropriate, provides feedback to the organizations concerning their use of SRM. This analysis assists in identifying the scope of the SRM effort, as well as identifying the resources required to conduct SRM. Safety personnel then share the information with upper management; this information helps upper management identify the scope of its oversight effort and provides insight into the processes used by the organization to improve the safety of the civil aviation environment. In addition, each department is responsible for maintaining its own Change Tracking Matrix and providing monthly updates to Safety personnel.
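A department's Change Tracking Matrix could be kept as a simple tabular record and rendered for the monthly update to Safety personnel. The columns below are hypothetical, since the document does not prescribe a matrix format:

```python
import csv
import io

# Assumed columns for an illustrative Change Tracking Matrix row.
COLUMNS = ["change_id", "department", "description", "srm_required",
           "srmd_reference", "outcome", "status"]

def matrix_to_csv(rows: list) -> str:
    """Render the tracking matrix as CSV text for the monthly update."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# Illustrative single-row matrix
sample = [{"change_id": "CH-001", "department": "ATC",
           "description": "illustrative procedure change",
           "srm_required": "yes", "srmd_reference": "SRMD-014",
           "outcome": "implemented", "status": "Closed"}]
report = matrix_to_csv(sample)
```

Keeping the matrix in a machine-readable form makes the review-and-analysis step described above straightforward to automate.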
C. Before Implementing a System Change. In addition to SRM, upper management verifies that a new or modified system (hardware and software) is ready for use in the operational environment for which it is intended. Specifically, the team responsible for the system conducts test and evaluation before implementing a system or a change to the system. It determines the method of verification based on the nature of the change. Through verification, the team shows that the system meets its requirements and performs its intended function(s).
1) Methods of verification include test, analysis, examination, and demonstration/evaluation. In addition to verification by the implementing department, Safety personnel conduct an independent assessment of operational readiness on designated systems prior to the in-service management phase.
D. SRM Resources. Each relevant department should have a designated Safety Manager who can provide additional guidance regarding the SMS and SRM. In addition, each relevant department should have a Safety Engineer who provides SRM expertise. Both the Safety Manager and Safety Engineer should also be available to provide input to the management personnel who will accept the risk associated with the change. In addition, if risk is to be accepted outside the department, the Safety Manager and/or Safety Engineer help facilitate that coordination.
Figure 2.6.1.9. Glossary
GLOSSARY
ATC Air Traffic Control
ATCT Air Traffic Control Tower
CSA Comparative Safety Assessment
ETBA Energy Trace and Barrier Analysis
FHA Fault Hazard Analysis
FMEA Failure Mode and Effect Analysis
FMECA Failure Modes, Effects, and Criticality Analysis
FTA Fault Tree Analysis
GACA General Authority of Civil Aviation
HAZOP Hazard and Operability Tool
HEA Human Error Analysis
HRH High Risk Hazard
HTS Hazard Tracking System
JSA Job Safety Analysis
JTA Job Task Analyses
MORT Management Oversight and Risk Tree
OSA Operational Safety Assessment
PHA Preliminary Hazard Analysis
PHL Preliminary Hazard List
SMS Safety Management System
SRM Safety Risk Management
SRMD Safety Risk Management Document
SRMDM SRM Decision Memo
SSAR System Safety Assessment Report
Section 2. State Safety Risk Register
2.6.2.1. Purpose. Develop and maintain a register of KSA State aviation safety risks to enable GACA to implement the State Safety Program effectively. This section describes the process that aims to identify, mitigate, and manage aviation safety risks. In other words, it describes the collection, analysis, and presentation of safety data so that GACA and other stakeholders can meaningfully compare the State safety performance with the set safety goals. The process involves:
(a) Systematic collection of safety data from all certified entities and other sources.
(b) Validation and consolidation of the safety data for sector- and industry-level analysis.
(c) Data analysis and presentation of safety information.
(d) Coordination with the industry and departments.
(e) Communication of safety information for necessary actions.
(f) Archiving safety information in the State Safety Risk Register in a standard format.
This section supports a systematic, process-driven approach to data collection, analysis, and recording for effective implementation of the SSP.
2.6.2.2. Stakeholders. The key stakeholders of the State Safety Risk Register process are the GACA President, the Executive Vice President Aviation Safety and Environmental Sustainability (AVSES), the AIB, the Certificating Departments within the AVSES sector, the Safety Risk Management (SRM) Department, and certificated entities.
2.6.2.3. State Safety Risk Register Process. The State Safety Risk Register process follows the flow chart depicted in Figure 2.6.2.1. The process involves data collection and consolidation, data analysis, and safety information presentation for objective comparison of actual safety performance with the set national safety goals. The process aims to provide a systematic approach to data collection, analysis, recording, and coordination with the various stakeholders, enhancing safety performance through appropriate action, such as additional or need-based surveillance or the introduction of safety policy or regulatory changes, based on the comparison of actual and targeted performance.
2.6.2.3.1. Data Collection and Analysis. According to GACAR Part 4, certified entities are obligated to provide the relevant safety and performance data in a stipulated format. The SRM Department, in coordination with other Departments, is required to collect all mandatory, voluntary, and confidential reports, aircraft operational and statistical reports, and air traffic information on a timely basis.
The GACA processes of data collection include at least the following sources:
(1) Surveillance Reports
(a) Surveillance data of certificate holders.
(b) Certificate holders' internal audit reports.
(c) Unannounced/spot inspection reports.
(d) Aircraft inspection reports.
(2) Occurrence Reports
(a) Accident and serious incident investigation reports.
(b) Other incident investigation reports.
(c) SDRs.
(d) Voluntary and confidential safety reports.
(3) Periodic/Statistical/Engineering/Traffic Reports
(a) Periodic safety reports (monthly, quarterly, half-yearly, annual) as required for certificated entities.
NOTE: Routine FDR reports and incident-related ATC recorder reports are included in the periodic operational reports of aircraft operators and ATS providers.
(b) Safety data from external sources/data feeds.
Data quality means ensuring that the data is fit for purpose rather than perfect. The quality of data involves the following criteria:
(a) Relevance.
(b) Timeliness.
(c) Accuracy and correctness.
Each department needs to assess the relevance of data based on its needs and activities. The relevance of data is the degree to which it meets the department's needs and whether the available data sheds light on the safety issues in the area. Timeliness influences relevance: it refers to the delay between the date/time of an occurrence (e.g., FDR data) and the time the data becomes available for analysis, which has a direct impact on safety action. Old or outdated data may not be useful; timely reporting therefore yields quality data. Accuracy and correctness refer to whether the data values are correct. The most common causes of inaccuracy are input errors at the sources or at the GACA data entry points, and data decay during transfer from the source operator's database to the GACA database through data feeds. With data accuracy, relevance, and timeliness in view, each department should focus on training personnel and submitting reports on time.
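The timeliness and accuracy criteria above can be expressed as simple automated checks on incoming reports. This is a minimal sketch: the 30-day timeliness window and the field names are assumed thresholds for illustration, not GACA requirements.

```python
from datetime import date, timedelta

def is_timely(occurrence_date: date, received_date: date,
              max_delay_days: int = 30) -> bool:
    """Timeliness: the delay between the occurrence and the data
    becoming available for analysis stays within an assumed window."""
    return received_date - occurrence_date <= timedelta(days=max_delay_days)

def is_accurate(record: dict, required_fields: tuple) -> bool:
    """Accuracy/correctness proxy: required values are present and
    non-empty (a guard against input errors at the data entry points)."""
    return all(record.get(f) not in (None, "") for f in required_fields)
```

Relevance, by contrast, is a judgment each department makes against its own needs and is not usefully automated.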
The Certificating Department(s) are required to validate the data for each operator, analyze the data for each operator, and analyze the consolidated data for the entire sector/industry. For each department, the report consists of an analysis report for each individual organization and a consolidated report for all organizations in the specific area, focusing on identifying safety risks.
An expert team in each department will perform data analysis. External or industry experts may also be involved in the analysis of safety data. The SRM Department will coordinate with all departments to organize the recording of safety information in a standardized format of the State Safety Risk Register.
The updated State Safety Risk Register, along with the safety data analysis reports and safety action reports, is to be submitted to the EVP every six months for safety progress monitoring and, where necessary, for changes to the surveillance program, policy, regulations, and procedures based on safety performance.
2.6.2.3.1.1. Safety Data Analysis (SDA). Data analysis is the process of applying statistical or other analytical techniques to check, describe, condense, evaluate, and visualize data. The use of suitable tools for the analysis of data provides a more accurate understanding of safety situations by examining the data in ways that reveal existing relationships, connections, patterns, and trends.
The data analysis should focus on the following aspects:
(a) Whether the certificated entities, processes, and systems are improving.
(b) Factors that cause change.
(c) Connections and correlations between or among various factors.
(d) Whether the assumptions are valid.
(e) The degree to which the obtained results can be trusted.
The analysis reports should allow departments to compare information and help draw conclusions from the data.
The analytical tools used in this case may be descriptive statistical data analysis, inferential/inductive data analysis, or predictive data analysis depending on the types of data collection. An expert/team should be involved in analyzing the aviation safety data.
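As one example of the descriptive-statistics tools mentioned above, a period-over-period comparison can flag whether an occurrence count is trending in the wrong direction. The halving approach below is an illustrative assumption, not a prescribed GACA method:

```python
from statistics import mean

def trend(counts: list) -> str:
    """Compare the mean of the later half of a monthly occurrence
    series against the earlier half to flag the direction of change."""
    mid = len(counts) // 2
    earlier, later = mean(counts[:mid]), mean(counts[mid:])
    if later > earlier:
        return "worsening"
    if later < earlier:
        return "improving"
    return "stable"
```

Inferential or predictive techniques would build on the same data but require an expert or team to select and validate the model, as noted above.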
Information visualization and reporting is another important task to be performed after analysis. It is important to identify suitable tools for the visualization and reporting of safety information after data analysis.
Through the safety data analysis (SDA) aspects of the State Safety Program (SSP), as per the SSP Implementation Assessment (SSPIA), GACA will have the ability to use the hazard identification and safety risk management process as a source of safety intelligence to identify hazards and safety deficiencies and to determine national operational safety risks and organizational challenges for inclusion in the NASP. The SSP provides safety information to the NASP and allows the State to manage its aviation activities in a coherent and proactive manner and to measure the safety performance of the civil aviation system.
2.6.2.3.2. Risk Management. Based on the Certificating Department(s)' inputs, SRM will carry out further analysis to determine the credibility of State safety risks in the respective area of operation. When a credible risk is identified, the SRM Department shall provide the relevant Certificating Department(s) with a "Safety Risk Report" [1] in a timely manner. The concerned Certificating Department(s) shall carry out an in-depth Risk Assessment of the probable risk(s) identified in the State Information Paper.
The Certificating Department(s) are expected to complete the Risk Assessment in accordance with the time limit established by the SRM Department and with the support of the relevant certificated entity. Once a Risk Assessment has been completed, the Certificating Department will share its assessment with the SRM Department for review. The SRM Department will evaluate the Risk Assessment and the proposed mitigations and provide the submitting Certificating Department with acceptance of the proposed mitigations or instructions for further analysis. If the Risk Assessment is not accepted by the SRM Department, the SRM team should provide the necessary support to finalize the Risk Assessment with an outcome acceptable to both the SRM Department and the Certificating Department.
The SRM Department will document the Risk Assessment, along with the related risks and mitigations in the State Safety Risk Register. The whole procedure consists of continuous cycles to identify and mitigate potential threats to the safety of the State in accordance with the SRM Department's routine receipt of analysis reports from the Certificating Departments.
NOTE [1]: A Safety Risk Report is a means of notifying the relevant Certificating Department of a credible safety risk that must be mitigated. The report requests that the Certificating Department, in collaboration with the relevant service provider, conduct a risk management process that captures contributing factors, consequences, preventative and recovery mitigations, pre- and post-mitigation risk class, risk description, threats, and the interval at which to conduct safety assurance over the effectiveness of risk mitigations.
2.6.2.3.3. Risk-Based Surveillance. The Certificating Department(s)' Risk Assessment will allow GACA to perform safety risk-based surveillance (SRBS), enabling prioritization and allocation of the State's safety management resources commensurate with the safety risk profile of each sector or individual service provider.
Risk-based Surveillance will facilitate building risk-based inspection schedules for operators. The results should be used solely to determine the minimum number of inspections and should not be used, either directly or indirectly, to determine the required number of safety inspectors.
2.6.2.4. Safety Assurance. As a direct outcome of documenting State safety risks, the SRM Department shall ensure that the identified risks' mitigations remain effectively enforced. SRM will periodically issue a State Information Paper to the Certificating Department(s) to carry out assurance of mitigations as part of their surveillance activities. Once the surveillance activity is completed by the Certificating Department, SRM shall be notified of the effectiveness of the referenced mitigations.
Figure 2.6.2.1: State Safety Risk Register
Appendix 1: Safety Risk Report (SRR)
Safety Risk Report (SRR) Objective
This report requests that the Certificating Department, in collaboration with the relevant service provider, conduct a risk management process that captures contributing factors, consequences, preventative and recovery mitigations, pre- and post-mitigation risk class, risk description, threats, and the interval at which to conduct safety assurance over the effectiveness of risk mitigations.
The aim of this paper is to provide the Certificating Departments within the Aviation Safety and Environmental Sustainability sector with clear guidelines on the safety information required to develop the State Safety Risk Register.
Section 3. State Safety Risk Profile
2.6.3.1. Purpose. The Kingdom of Saudi Arabia's (KSA) State Safety Program (SSP) is built on dynamic safety risk management foundations; accordingly, the General Authority of Civil Aviation (GACA) has developed a number of safety risk profiles, including but not limited to organizational safety risk profiles, sector safety risk profiles, an industry safety risk profile, and a system safety risk profile. These profiles are all tools for evaluating the perceived highest risk associated with the overall operation of the organization, area, sector, industry, and system.
The purpose of developing a Safety Risk Profile is to consistently, efficiently, and systematically manage the Kingdom's aviation safety system; the main objectives of the SRP process are to:
• Efficiently manage the aviation industry and its related sectors and organizations, as "what is not measured is not managed".
• Transform the State safety management system in the Kingdom of Saudi Arabia into a risk-based system and adopt a safety risk-based surveillance (SRBS) approach.
• Serve as one of the main identification tools for the preventive safety management system.
2.6.3.2. Stakeholders. The key stakeholders involved in the Safety Risk Profile process are the President, the Executive Vice President (EVP), the respective Technical Departments (TDs) within the Aviation Safety and Environmental Sustainability sector, the Safety Risk Management (SRM) Department, the Economic Licensing General Department, and all certificate holders.
2.6.3.3. Safety Risk Profile (SRP) Framework. The Safety Risk Profile (SRP) tool utilizes all factors identified in the ICAO Safety Management Manual (Doc 9859), but they are reorganized and redistributed to suit the needs and local conditions of the General Authority of Civil Aviation and the Saudi aviation industry.
The SRP framework, depicted in Figure 2.6.3.1, is made up of 10 weighted indices; each index comprises several sub-indices (i.e., factors), and every factor has a set of defined descriptors (i.e., risk measures) to ensure an objective, unbiased analysis of each of the factors and indices.
The 10 indices are made up of three organizational indices [OI] and seven safety indices [SI]; these indices are:
1. Financial Health [OI]
2. Complexity of Organization [OI], including Years of Operation, Exposure to Safety Risk, and Organizational Scope of Activities
3. Key Safety Personnel [OI]
4. Safety Management System [SI]
5. Safety Performance [SI]
6. Previous Audit Results & Resolution [SI]
7. Hazard Identification and Risk Assessment Maturity [SI]
8. Reporting Maturity [SI]
9. Previous Occurrences [SI]
10. Safety Promotion [SI]
The specific calculation method, weights, and assessment checklists utilized by the SRP framework are contained internally within the documented GACA processes.
Figure 2.6.3.1: Safety Risk Profile Framework
2.6.3.4. Safety Risk Profile (SRP) Process. The Safety Risk Profile process follows the flow chart depicted in Figure 2.6.3.5. The process is split between three main categories: the Industry Organizations (or certificate holders), the SRM Department, and the respective Technical Departments (TDs).
2.6.3.4.1. Safety Risk Profile Initiation. Every certificate holder under the relevant GACARs and as applicable to GACAR Part 5 shall have a dedicated Safety Risk Profile (SRP). All new certificate holders will require an SRP be established as per Figure 2.6.3.5.
All existing certificate holders will have their SRP created during the SRP implementation phase by the end of 2024.
2.6.3.4.2. Data and Evidence Collection. On an annual basis, one of the assigned inspectors within the respective Technical Department (TD) will contact the certificate holder and request the relevant information to complete the SRP assessment. The information requested may include financial [1], safety, operational, organizational [2], and any other data required to complete the Safety Risk Profile Assessment Checklist for that service provider. All obtained evidence will be subject to GACA's data privacy provisions and will be treated as confidential information. The certificate holder must furnish all required data and information within 10 working days of receiving the request. If the certificate holder does not comply with providing the relevant information, the inspector shall request the data again.
If the inspector realizes that the certificate holder will not provide the required information or is not complying with the directives, the inspector shall escalate the matter to their General Manager (GM). The Technical Department (TD) GM will attempt to alleviate the situation, and if not resolved, the issue will be escalated to the EVP.
The assigned inspector will then utilize the Safety Risk Profile Assessment Checklist and the evidence collected from the certificate holder to select the relevant risk measure for every checklist index and parameter/factor. The checklist, once completed, will provide the inspector with the calculated Safety Risk Index for the relevant time period. The inspector must ensure that the SRP Assessment Checklist is completed within 6 weeks of the end of the year being analyzed.
NOTE [1]: The SRP Annual Financial Health Reporting Form (Form Number: GACA_AVSES_SRM_F-018) shall be used.
2.6.3.4.4. Trend Analysis. Once the Safety Risk Index has been calculated and determined for the certificate holder, and the index has been shared with the SRM General Department, a two-level analysis will occur.
The first level analysis will be completed by the respective Technical Department and will compare the performance of that certificate holder to its own performance across the previous time periods. Additionally, the respective Technical Department will compare the performance of that certificate holder against other certificate holders within the same area.
The second level analysis will be completed by the SRM General Department and will compare the collated performance of all certificate holders within each area and determine the Risk Index of every area and the Risk Index of the industry in general.
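The two-level analysis above can be sketched in a few lines: level one compares a certificate holder's latest index against its own history and its peers, and level two aggregates holders into an area-level index. All names and the use of simple means are illustrative assumptions, not GACA's actual method:

```python
from statistics import mean

def level_one(holder_history: list, peer_indices: list) -> dict:
    """Level 1 (Technical Department): compare the latest Safety Risk
    Index to the holder's own past periods and to peers in the area.
    Positive deltas indicate higher (worse) risk than the baseline."""
    latest = holder_history[-1]
    return {
        "vs_own_history": latest - mean(holder_history[:-1]),
        "vs_peers": latest - mean(peer_indices),
    }

def level_two(area_indices: dict) -> dict:
    """Level 2 (SRM General Department): area-level risk index as the
    mean of all certificate holders' indices per area."""
    return {area: mean(indices) for area, indices in area_indices.items()}

# Illustrative inputs: a holder's three annual indices and two peers
comparison = level_one([3, 4, 5], [4, 4])
area = level_two({"aerodromes": [2, 4], "air operators": [3, 3]})
```

An industry-wide index could then be derived from the area indices in the same way, supporting the determination of whether an intervention is warranted.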
At any point during the analysis, either of the analysis departments may determine that an intervention is required based on the safety performance of the certificate holder. If no intervention is required, the respective Technical Department will publish the SRP internally. If an intervention is required by any department, the intervention requestor will escalate to the EVP and include the relevant issue, the proposed solution, and the proposed action plan.
2.6.3.4.5. Annual SRP Cycle. The above process will be repeated annually for every certificate holder as per an internally published schedule.
2.6.3.4.6. Safety Risk Profile Process Assurance. The SRM General Department shall conduct assurance over the respective Technical Departments’ Safety Risk Profile Process to ensure compliance of all concerned stakeholders. The assurance activities will be part of the annual State Safety Assurance Program and/or conducted as ad hoc audits/inspections as required.
Any findings will be reported to the SRM General Manager and the EVP for Aviation Safety and Environmental Sustainability. Closure of the audit findings will require a Root Cause Analysis for each finding, together with an approved Action Plan.
2.6.3.5. Visibility of the Safety Risk Profile (SRP). All organizational Safety Risk Profiles are published internally within the GACA systems and documentation and are kept confidential. Each certificate holder has full access to the summary of its own Safety Risk Profile and may request, in writing through the respective Technical Department, to view the summary profile at any time.
The Summary SRP will include the summary scores of the 10 indices over a certain time period (see Figure 2.6.3.2), a graph showing the Safety Risk index trend over a certain period of time (see Figure 2.6.3.3), and the major comments as provided by the respective Technical Department. In some instances, the respective Technical Department may share specific information and analysis regarding a single time period (i.e., annual) alongside some comment (see Figure 2.6.3.4).
Figure 2.6.3.2: Example Summary SRP Scores Table (over multiple time periods)
Figure 2.6.3.3: Example Summary SRP Index Trend Figure (over multiple time periods)
Figure 2.6.3.4: Example Summary SRP Scores Table (a) and Index Figure (b) over a single time period
2.6.3.6. Continuous Improvement.
The Safety Risk Profile process is managed and controlled by the Safety and Risk Management General Department and undergoes regular and routine review. The process will be reviewed by the SRM General Department and feedback from the respective Technical Departments and other stakeholders will be taken into consideration, including the SSP Working Groups. Any required ad hoc changes to the framework and/or process will be reviewed, amended, approved, and published as soon as reasonably practicable.
Figure 2.6.3.5: Safety Risk Profile Process
Appendix 1: SRP Annual Financial Health Reporting Form
The referenced form, the SRP Annual Financial Health Reporting Form (GACA-AVSES-SRM-F018), can be downloaded from GACA's website under the relevant forms.
URL: https://www.gaca.gov.sa/rules-and-regulations-category/aviation-safety-and-environmental-sustainability/forms/srm/gaca-avses-srm-f018---srp-annually-financial-health-reporting-form
Section 4. Guidance for Aviation Occurrence Categories
2.6.4.1. Purpose. GACA utilizes an Electronic Reporting System for the submission of Mandatory Occurrence Reports (MOR), Voluntary Occurrence Reports (VOR) and confidential reports. The purpose of this part is to explain the differences between each report type and the subsequent category and subcategory classifications, using common taxonomies and definitions intended to improve the aviation community’s capacity to focus on common safety issues.
This taxonomy is developed in alignment with the International Civil Aviation Organization (ICAO) and the Commercial Aviation Safety Team (CAST): the CAST/ICAO Common Taxonomy and Definitions. Note 1 — Information referenced in this document is for explanatory purposes only.
The following Occurrence Types can be raised through GACA electronic reporting system: A. Accidents An occurrence associated with the operation of an aircraft which, in the case of a manned aircraft, takes place between the time any person boards the aircraft with the intention of flight until such time as all such persons have disembarked, or in the case of an unmanned aircraft, takes place between the time the aircraft is ready to move with the purpose of flight until such time as it comes to rest at the end of the flight and the primary propulsion system is shut down, in which:
1) a person is fatally or seriously injured. 2) the aircraft sustains damage or structural failure which adversely affects the structural strength, performance or flight characteristics of the aircraft, and would normally require major repair or replacement of the affected component.
3) the aircraft is missing or is completely inaccessible. Note 2 — Further details on the definition of accidents, serious incidents and incidents can be found in GACAR Part 1. B. Serious Incidents An incident involving circumstances indicating that there was a high probability of an accident and associated with the operation of an aircraft which, in the case of a manned aircraft, takes place between the time any person boards the aircraft with the intention of flight until such time as all such persons have disembarked, or in the case of an unmanned aircraft, takes place between the time the aircraft is ready to move with the purpose of flight until such time as it comes to rest at the end of the flight and the primary propulsion system is shut down.
Note 3 — The difference between an accident and a serious incident lies only in the result. Note 4 — Examples of serious incidents can be found in GACAR Part 4 APPENDIX A – EXAMPLES OF SERIOUS INCIDENTS and the ICAO Annex 13, Attachment C.
C. Incidents An occurrence, other than an accident, associated with the operation of an aircraft which affects or could affect the safety of operation. D. Hazards A condition or an object with the potential to cause or contribute to an aircraft incident or accident.
E. Non-Aircraft Related Used for Equipment Collisions or Violations that did not involve an aircraft. F. Voluntary Occurrence Used for reporting occurrences not covered under any of the above. Note 5 — For confidential reporting through the GACA Electronic Reporting System, any person can use the ‘Anonymous Login’ option on the main login page.
2.6.4.2. Occurrence Categories. The following GACA reporting taxonomies are based on the International Civil Aviation Organization (ICAO) and the Commercial Aviation Safety Team (CAST) common taxonomies and definitions for aviation accident and incident reporting.
Note 6 — In case multiple categories are associated with a single occurrence, the highest severity category should be selected. Note 7 — The term runway or landing area is taken in its broadest sense and includes runways, landing strips, waterways, unimproved landing areas, and landing pads (which may include offshore platforms, building roofs, roads, ships, and fields), or other landing areas.
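Note 6 above can be sketched as a simple selection rule. The severity ordering of the occurrence categories used below is a hypothetical example for illustration; the actual ordering is defined by GACA's reporting system, not by this sketch.

```python
# Sketch of Note 6: when multiple categories are associated with a single
# occurrence, the highest-severity category is selected for reporting.
# The ordering below (least to most severe) is an assumed example only.
SEVERITY_ORDER = ["ADRM", "ARC", "RE", "MAC"]

def reportable_category(categories: list[str]) -> str:
    """Return the highest-severity category among those associated."""
    return max(categories, key=SEVERITY_ORDER.index)

# An occurrence tagged as both Abnormal Runway Contact (ARC) and Runway
# Excursion (RE) would, under this assumed ordering, be reported as RE.
selected = reportable_category(["ARC", "RE"])
```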
ABNORMAL RUNWAY CONTACT (ARC)
Any landing or takeoff involving abnormal runway or landing surface contact. Includes: Hard/heavy landings (touchdowns on the runway with more force than a normal landing). Long landings (touchdowns further along the runway than intended).
Fast landings (touchdowns on the runway at higher than normal speed). Off center landings (touchdowns on the runway deviating from the centerline toward one side of runway). Crabbed landings (aircraft flown with its nose pointed into wind to compensate for the crosswind).
Nose wheel first touchdowns. Tail strikes (any tail strike during take-off or landing). Wingtip/nacelle (exterior mount housing) strikes during take-off or landing. Gear-up landings (use for system/component failure or malfunction occurrences which led to the gear-up landing).
Abnormal Runway Contact (ARC) subcategories in GACA Electronic Reporting System: - Crabbed Landing - Fast Landing - Gear-up Landing - Hard Landing - Heavy Landing - Long Landing - Nacelle Strike - Nose Wheel First Touchdown - Off Center Landing - Tail Strike - Wingtip Strike
ABRUPT MANEUVER (AMAN)
The intentional abrupt (sudden and unexpected) maneuvering of the aircraft by the flight crew. Includes: The intentional abrupt maneuvering of the aircraft to avoid a collision with terrain, objects/obstacles, weather or other aircraft.
Abrupt maneuvering on ground; examples include hard braking maneuver, rapid change of direction to avoid collisions, etc. Intentional deviations resulting from a PIC exercising emergency authority. Abrupt Maneuver (AMAN) subcategories in GACA Electronic Reporting System:
- Avoid Collision with Obstacles - Avoid Collision with Terrain - Avoid Collision with vehicle - Avoid Other Aircraft - Avoid Weather - Hard Braking maneuver - Rapid change in direction (on ground)
AERODROME (ADRM)
Occurrences involving Aerodrome design, service, or functionality issues. Occurrences do not necessarily involve an aircraft. Includes: Deficiencies/issues associated with State-approved Aerodromes and Heliports, including:
- Runways and Taxiways - Buildings and structures - Crash/Fire/Rescue (CFR) services - Obstacles on the Aerodrome property - Lighting, markings, and signage
Deficiencies with snow, frost, or ice removal from aerodrome surfaces. Closed runways, improperly marked runways, construction interference, etc.
Effects of aerodrome design. Loose foreign objects on aerodromes and heliports. Failures of glider winch launch equipment. Aerodrome (ADRM) subcategories in GACA Electronic Reporting System: - Aerodrome Design Deficiencies - Closed Runway - Construction Interference - Crash/Fire/Rescue Services - Foreign object debris (FOD) - Improper runway markings - Lighting Failure - Signage Limitation - Obstacles on Aerodrome Property
COLLISIONS (MAC)
Air proximity issues, Traffic Collision Avoidance System (TCAS)/Airborne Collision Avoidance System (ACAS) alerts, loss of separation as well as near collisions or collisions between aircraft in flight.
Includes: All collisions between aircraft while both aircraft are airborne. Separation-related occurrences caused by either air traffic control or cockpit crew. AIRPROX reports. Genuine TCAS/ACAS alerts.
Deviations from assigned altitude or course to avoid other aircraft as a result of visual detection or complying with a TCAS RA. AIRPROX/TCAS Alert/Loss Of Separation/Near Midair Collisions/Midair Collisions (MAC) subcategories in GACA Electronic Reporting System:
- AIRPROX - Collision between two Aircraft - Separation Occurrence
ATM/CNS (ATM)
Occurrences involving Air Traffic Management (ATM) or Communication, Navigation, Surveillance (CNS) service issues. Occurrences do not necessarily involve an aircraft. Includes: Air Traffic Control (ATC) facility/personnel failure/degradation.
CNS service failure/degradation, procedures, policies, and standards. NAVAID outage, NAVAID service error, controller error, supervisor error, ATC computer failure, radar failure, and navigation satellite failure.
All of the facilities, equipment, personnel, and procedures involved in the provision of State-approved Air Traffic Services. ATM/CNS (ATM) subcategories in GACA Electronic Reporting System: - ATC CWP Failure - ATC Miscommunication - ATC Personnel Incapacitation - ATCO Error - CNS Degradation - CNS Failure - Frequency Interference
BIRD (BIRD)
Occurrences involving collisions/near collisions with bird(s) only. (May occur in any phase of flight). BIRD (BIRD) has no subcategories in GACA Electronic Reporting System.
CABIN SAFETY EVENTS (CABIN)
Miscellaneous occurrences in the passenger cabin of transport category aircraft. Includes: Events related to carry-on baggage, supplemental oxygen, or missing/non-operational cabin emergency equipment.
Unintended deployment of emergency equipment. Injuries of persons while in the passenger cabin of an aircraft. Cabin Safety Events (CABIN) subcategories in GACA Electronic Reporting System:
- Carry-on Baggage - Missing Emergency Equipment - Non-Operational Emergency Equipment - Slide/Raft Deployment - Supplemental Oxygen
COLLISION WITH OBSTACLE(S) DURING TAKEOFF AND LANDING (CTOL)
Collision with obstacle(s) during takeoff or landing while airborne. For all aircraft (excluding rotorcraft), to be used only in cases in which the crew was aware of the true location of the obstacle, but its clearance from the aircraft flightpath was inadequate.
Note 8 — See CFIT in cases in which the crew was NOT aware of the true location of the obstacle during takeoff or landing while airborne. Includes: Contact with obstacles, such as vegetation, trees and walls, snowdrifts, power cables, telegraph wires and antennae, offshore platforms, maritime vessels and structures, and land structures and buildings.
Collisions during takeoff to and landing from the hover. Water obstacles during takeoff from water (e.g., waves, dead-heads, ships, swimmers). Collisions with obstacles during takeoff and landing, such as trees or walls.
Collision with Obstacle(s) During Takeoff and Landing (CTOL) subcategories in GACA Electronic Reporting System: - Collision with obstacle during Landing - Collision with obstacle during Take-off
CONTROLLED FLIGHT INTO OR TOWARD TERRAIN (CFIT)
In-flight collision or near collision with terrain, water, or obstacle without indication of loss of control. Use only for occurrences during airborne phases of flight. Can occur during either Instrument Meteorological Conditions (IMC) or Visual Meteorological Conditions (VMC).
Includes: Collisions with those objects extending above the surface (for example, towers, trees, power lines, cable car support, transport wires, power cables, telephone lines and aerial masts). Instances when the cockpit crew is affected by visual illusions or degraded visual environment (e.g., black hole approaches and helicopter operations in brownout or whiteout conditions) that result in the aircraft being flown under control into terrain, water, or obstacles.
Includes flying into terrain during transition into forward flight. Controlled Flight into or Toward Terrain (CFIT) subcategories in GACA Electronic Reporting System: - Inflight Collision with Terrain - Inflight Near Collision with Terrain
EVACUATION (EVAC)
Occurrence in which either: a) a person(s) was/were injured during an evacuation; b) an unnecessary evacuation was performed (an unnecessary evacuation is one that was either incorrectly commanded or uncommanded by the crew); c) evacuation equipment failed to perform as required; or d) the evacuation contributed to the severity of the occurrence.
Only used for passenger-carrying operations involving transport category aircraft. Includes: Cases in which an injury(ies) was (were) sustained during the evacuation through an emergency exit or main cabin door.
Cases in which the evacuation itself is the accident (in essence, had there not been an evacuation there would not have been an accident). Evacuation following a ditching or survivable crash landing in water provided one of the conditions above is met.
Evacuation (EVAC) subcategories in GACA Electronic Reporting System: - Evacuation Equipment Failure - Person Injured during Evacuation - Unnecessary Evacuation was Performed - Evacuation Contributed to Occurrence Severity
EXTERNAL LOAD RELATED OCCURRENCES (EXTL)
Occurrences during or as a result of external load or external cargo operations. Includes: Cases in which external load or the load lifting equipment used (e.g., long line, cable) contacts terrain, water surface, or objects.
Cases in which the load or, in the absence of a load, the load lifting equipment strikes or becomes entangled with the main rotor, tail rotor, or the helicopter fuselage. Injuries to ground crew handling external loads as result of contact with/dropping/inadvertent release of external load.
Injuries to ground crew handling external loads due to the downwash effect or a falling branch, tree, etc. External hoist, human external cargo, and long lines. External Load Related Occurrences (EXTL) subcategories in GACA Electronic Reporting System:
- Failure of Onboard External Lifting Equipment - Injury to Ground Crew Handling External Load - External Load or Load Lifting Equipment Contacts Terrain, Water Surface, Or Objects. - Load or Load Lifting Equipment Strikes or Becomes Entangled with Helicopter Parts
FIRE/SMOKE (NON-IMPACT) (F–NI)
Fire or smoke in or on the aircraft, in flight, or on the ground, which is not the result of impact. Includes: Fire due to a combustive explosion from an accidental ignition source. Fire and smoke from system/component failures/malfunctions in the cockpit, passenger cabin, or cargo area.
Fire/Smoke (Non-Impact) (F–NI) subcategories in GACA Electronic Reporting System: - Cabin Fire or Smoke - Cargo Area Fire or Smoke - Cockpit Fire or Smoke - Fire or Smoke on External Part of Aircraft
FIRE/SMOKE (POST-IMPACT) (F–POST)
Fire/Smoke resulting from impact. This category is only used for occurrences in which post impact fire was a factor in the outcome. Fire/Smoke (Post-Impact) (F–POST) has no subcategories in GACA Electronic Reporting System.
FUEL RELATED (FUEL)
One or more powerplants experienced reduced or no power output due to fuel exhaustion, fuel starvation/mismanagement, fuel contamination/wrong fuel, or carburetor and/or induction icing. The following fuel-related definitions are provided for clarity:
- Exhaustion: No usable fuel remains on the aircraft. - Starvation/mismanagement: Usable fuel remains on the aircraft, but it is not available to the engines. - Contamination: Any foreign substance (for example, water, oil, ice, dirt, sand, bugs) in the correct type of fuel for the given powerplant(s).
- Wrong fuel: Fuel supplied to the powerplant(s) is incorrect, for example, Jet A into a piston powerplant, 80 octane into a powerplant requiring 100 octane. Includes: Flight crew or ground crew-induced fuel-related problems that are not the result of mechanical failures.
Cases in which there was a high risk of fuel exhaustion but there was no actual loss of power. Fuel exhaustion, fuel starvation/mismanagement, fuel contamination, wrong fuel, carburetor and induction icing. Fuel Related (FUEL) subcategories in GACA Electronic Reporting System:
- Fuel Contamination - Fuel Exhaustion - Fuel Starvation - Wrong Fuel
GLIDER TOWING RELATED EVENTS (GTOW)
Premature release, inadvertent release or non-release during towing, entangling with the towing cable, loss of control, or impact into the towing aircraft/winch. Definition: A glider is a fixed-wing aircraft that is supported in flight by the dynamic reaction of the air against its lifting surfaces, and whose free flight does not depend on an engine.
Glider Towing Related Events (GTOW) has no subcategories in GACA Electronic Reporting System.
GPS RELATED (GPS)
Occurrences related to Global Positioning System (GPS) or Global Navigation Satellite System (GNSS). Includes: GPS/GNSS spoofing, jamming and signal loss. GPS Related (GPS) has no subcategories in GACA Electronic Reporting System.
GROUND COLLISION (GCOL)
Aircraft collisions with another aircraft, person, ground vehicle, obstacle, building, structure, etc., while on a surface other than the runway used for landing or intended for takeoff. Taxiing includes ground and air taxiing for rotorcraft on designated taxiways.
Collisions while the aircraft is moving under its own power in the gate, ramp, or tiedown area. Note 9 — Ground collisions resulting from events categorized under Runway Incursion (RI), Wildlife (WILD), or Ground Handling (RAMP) are excluded from this category.
Ground Collision (GCOL) subcategories in GACA Electronic Reporting System: - With Aircraft - With Building - With Vehicle - With Obstacle - With Person - With Structure
GROUND HANDLING (RAMP)
Occurrences during (or as a result of) ground handling operations at aerodromes, heliports, helidecks, and unprepared operating sites. Includes: Occurrences that occur while servicing, boarding, loading, and deplaning the aircraft. Occurrences involving boarding and disembarking while a helicopter is hovering. Deficiencies or issues related to snow, frost, and/or ice removal from aircraft. Injuries to people from propeller/main rotor/tail rotor/fan blade strikes.
Pushback/powerback/towing events. Jet blast and prop/rotor downwash ground handling occurrences. Aircraft external preflight configuration errors (e.g., improper loading and improperly secured doors and latches) that lead to subsequent events.
All parking areas (ramp, gate, tiedowns). Operations at aerodromes, heliports, helidecks, and unprepared operating sites. Collisions while the aircraft is moving during power back in the gate, ramp, or tiedown area.
Collisions during (or as a result of) ground handling operations at aerodromes, heliports, helidecks, and unprepared operating sites. Ground Handling (RAMP) subcategories in GACA Electronic Reporting System:
- Catering Loader - Fuel Spillage - Ground Equipment Impact - Impacted by Fuel Truck - Impacted by High Loader - Impacted By Medical Lift - Improper Loading - Injuries to People from Propeller/Main Rotor/Tail Rotor/Fan Blade Strikes - Improperly Secured Doors and Latches - Jet Blast - Prop/Rotor Downwash - Pushback/Towing/Back Power
ICING (ICE)
Accumulation of snow, ice, freezing rain, or frost on aircraft surfaces that adversely affects aircraft control or performance. Includes: Accumulations that occur in flight or on the ground (i.e., deicing-related).
Windscreen icing which restricts visibility is also covered here. Ice accumulation on sensors, antennae, and other external surfaces. Ice accumulation on external surfaces including those directly in front of the engine intakes.
Injuries sustained as a result of icing events. Icing (ICE) subcategories in GACA Electronic Reporting System: - Ice Accumulation on Antenna - Ice Accumulation on Engine Intake - Ice Accumulation on External Surfaces - Ice Accumulation on Sensors - Inflight Ice Accumulation - On Ground Ice Accumulation - Windscreen Icing which Restricts Visibility
LOSS OF CONTROL–GROUND (LOC–G)
Loss of aircraft control while the aircraft is on the ground. Used only for non-airborne phases of flight, i.e., ground/surface operations. Includes: Loss of control as a result of a contaminated runway or taxiway (e.g., rain, snow, ice, slush).
Loss of control during ground operations as the result of system/component failure or malfunction of the powerplant or non-powerplant. Loss of control during ground operations as the result of evasive action taken during a Runway Incursion (RI) or Wildlife (WILD) encounter.
Rotorcraft during sloping ground or moving helideck operations, dynamic rollover and ground resonance events. Taxiway excursions due to a loss of control on the ground. Note 10 — If loss of control on the ground resulted in a runway excursion, code as Runway Excursion (RE).
Loss of Control–Ground (LOC–G) subcategories in GACA Electronic Reporting System: - as the result of Contaminated Runway or Taxiway - as the result of System/Component Failure or Malfunction Powerplant - as the result of System/Component Failure or Malfunction Non-Powerplant - as the result of Evasive Action
LOSS OF CONTROL–INFLIGHT (LOC–I)
Loss of aircraft control, or deviation from intended flightpath, in flight. Loss of control in flight is an extreme manifestation of a deviation from intended flightpath. The phrase “loss of control” may cover only some of the cases during which an unintended deviation occurred. Used only for airborne phases of flight in which aircraft control was lost.
Includes: Loss of control during flight as a result of a deliberate maneuver (e.g., stall/spin practice). Occurrences involving configuring the aircraft (e.g., flaps, slats, onboard systems, etc.) as well as rotorcraft retreating blade stall.
Stalls are considered loss of control. Rotorcraft “Loss of Tail Rotor Effectiveness.” Loss of control during practice or emergency autorotation. Pilot-induced or -assisted oscillations. Runway contact after takeoff due to loss of control.
Lateral or vertical deviations resulting from extreme manifestations of loss of aircraft control in flight. If the control of the aircraft is lost (induced by crew, weather or equipment failure). Helicopter hard/heavy landings after an off-field emergency autorotation when there was no intention to land before the autorotation was entered.
Loss of Control–Inflight (LOC–I) subcategories in GACA Electronic Reporting System: - Improper Aircraft Configuration - Deliberate Maneuver - Pilot Oscillations - Rotorcraft Autorotation - Rotorcraft Vortex Ring - Rotorcraft Loss of Tail Rotor Effectiveness - Stall
LOSS OF LIFTING CONDITIONS EN ROUTE (LOLI)
Landing en route due to loss of lifting conditions. Applicable only to aircraft that rely on static lift to maintain or increase flight altitude, namely sailplanes, gliders, hang gliders and paragliders, balloons and airships.
Includes: All static lift forms are to be considered, including atmospheric lift, namely from orographic, thermal, mountain wave and convergence zone sources, and buoyancy lift, namely from lighter-than-air gas or hot air.
Motor glider and paramotor aircraft if operating under static atmospheric lift conditions, and the engine could not be started. Off-field landing by gliders. Loss Of Lifting Conditions En Route (LOLI) subcategories in GACA Electronic Reporting System:
- Loss of Atmospheric Lift - Loss of Convergence Lift - Loss of Mountain Wave Lift - Loss of Orographic Lift - Loss of Thermal Lift
LOW ALTITUDE OPERATIONS (LALT)
Collision or near collision with obstacles/objects/terrain while intentionally operating near the surface (excludes takeoff or landing phases). “Terrain” includes water, vegetation, rocks, and other natural elements lying on, or growing out of, the earth.
Includes: Ostentatious display, maneuvering at low height, aerobatics, sightseeing or demonstration flights. Aerial inspection, avalanche mining, human hoist or human cargo sling, or search and rescue operations. Aerial application, intentional helicopter operations close to obstacles during aerial work, and scud running with airplanes (ducking under low visibility conditions).
Intentional maneuvering in close proximity to cliffs, mountains, into box canyons, and similar flights in which the aircraft aerodynamic capability is not sufficient to avoid impact. Low Altitude Operations (LALT) subcategories in GACA Electronic Reporting System:
- Collision with Object - Collision with Obstacle - Collision with Terrain - Intentional Maneuvering in Close Proximity to Cliff - Intentional Maneuvering in Close Proximity to Mountain - Near Collision with Terrain/Obstacle/Object
MEDICAL (MED)
Occurrences involving illnesses of persons on board an aircraft. Includes: Crewmembers unable to perform duties due to illness. Medical emergencies due to illness involving any person on board an aircraft, including passengers and crew.
Illnesses or non-injury medical emergencies. Medical (MED) subcategories in GACA Electronic Reporting System: - Flight Crew Incapacitation - Cabin Crew Incapacitation - Passenger Death - Passenger Sickness
NAVIGATION ERRORS (NAV)
Occurrences involving the incorrect navigation of aircraft on the ground or in the air. Includes:
- Lateral navigation errors caused by navigating using the improper navaid or improper programming of aircraft navigation systems
- Airspace infringement/penetration resulting from improper navigation, uncertainty of position, improper planning, or failure to follow procedures prior to entering airspace
- Failure to accurately track navigation signals (lateral or vertical)
- Altitude/level busts (see below for exceptions)
- Deviating from ATC/ATM clearances or published procedures (SID/DP, STAR, approach procedures, charted visual procedures)
- Failure to follow clearances or restrictions while operating on the surface of an aerodrome, including:
- Taxiing or otherwise operating an aircraft on a restricted portion of an aerodrome (cargo ramp, air carrier ramp, general aviation ramp, military ramp, wingspan- or weight-restricted taxiways or runways, etc.)
- Take-offs, aborted take-offs, or landings on a taxiway, unassigned runway, or closed runway (see below for exceptions)
- Approaches or landings to/on unassigned runways or to/at the wrong aerodrome.
Taxiway excursions (except following a loss of control on the ground or intentionally steering an aircraft off a taxiway to avoid a collision). Navigation Errors (NAV) subcategories in GACA Electronic Reporting System:
- Airspace Infringement/Penetration - Altitude/Level Busts - Approaches to/on Unassigned runways or to/at the Wrong Aerodrome - Deviating from ATC/ATM clearances or Published Procedures - Lateral or Vertical Navigation Errors - Take-offs, Aborted Take-offs, or Landings on a Taxiway, or Closed Runway - Taxiway Excursion
OTHER (OTHR)
Any occurrence associated with the operation of an aircraft which affects or could affect the safety of operation not covered under another category. Other (OTHR) has no subcategories in GACA Electronic Reporting System.
RUNWAY EXCURSION (RE)
A veer off or overrun off the runway surface during either the takeoff or landing phase only. Includes: Intentional or unintentional runway excursions, for example, a deliberate veer off to avoid a collision as a result of a Runway Incursion.
All cases in which the aircraft left the runway/helipad/helideck regardless of whether the excursion was the consequence of another event. Runway Excursion (RE) subcategories in GACA Electronic Reporting System:
- Veer Off - Overrun
RUNWAY INCURSION (RI)
Any occurrence at an aerodrome involving the incorrect presence of an aircraft, vehicle, or person on the protected area of a surface designated for the landing and takeoff of aircraft. Includes: Runway incursions resulting from the improper navigation of an aircraft at an aerodrome.
Runway incursions resulting from an ATC/ATM error. Loss of separation with at least one aircraft on the ground. Runway incursion (RI) subcategories in GACA Electronic Reporting System: - With Aircraft - With Person - With Vehicle
SECURITY RELATED (SEC)
Criminal/security acts which result in accidents or incidents (per Annex 13 to the Convention on International Civil Aviation). Includes: Hijacking and/or aircraft theft; interference with a crewmember (e.g., unruly passengers); flight control interference (e.g., laser attacks); ramp/runway/taxiway security (e.g., drone activities over the aerodrome vicinity); sabotage; suicide; or acts of war.
Injuries sustained as a result of Intentional acts (suicide, homicide, acts of violence, self-inflicted injury, or laser attacks). Security Related (SEC) subcategories in GACA Electronic Reporting System:
- Act of War - Aircraft Theft - Dangerous Goods - Flight Control Interference - Hi-Jacking - Interference with a Crew Member. - Laser Attack - Passenger Smoking - Ramp Security - Runway Security - Sabotage - Suicide - Taxiway Security - Threats - UAV / Drone - Unruly Passenger
SYSTEM/COMPONENT FAILURE OR MALFUNCTION (NON-POWERPLANT) (SCF–NP)
Failure or malfunction of an aircraft system or component other than the powerplant. Includes: Rotorcraft main rotor and tail rotor system, drive system and flight control failures or malfunctions. Errors or failures in software and database systems.
Non-powerplant parts or pieces separating from an aircraft. For unmanned aircraft, includes failure or malfunction of ground-based, transmission, or aircraft-based communication systems or components or datalink systems or components.
Failures/malfunctions of ground-based launch or recovery systems equipment. Failures/malfunctions, including those related to or caused by maintenance issues. Occurrences in which the gear collapses during the takeoff run or the landing roll.
A mechanical failure which rendered the aircraft uncontrollable. GPS/GNSS issues as a result of aircraft technical failure. Non-combustive explosions such as tire bursts and pressure bulkhead failures.
False (not genuine) TCAS/ACAS alerts, which can occur due to:
1. Transponder malfunctions: if an aircraft's transponder is not functioning correctly or is incorrectly configured, it could send misleading altitude or position information, triggering a TCAS/ACAS alert in another aircraft.
2. Incorrect aircraft identification: if TCAS/ACAS misidentifies the position or altitude of another aircraft due to issues with its own radar or signal processing, it could trigger a false alert.
System/Component Failure or Malfunction (Non-Powerplant) (SCF–NP) subcategories in GACA Electronic Reporting System:
- Aileron - Flight Control System Malfunction
- Air Conditioning System Malfunction
- Communication System Malfunction
- Electric System Malfunction
- Electronic System Malfunction
- Elevator - Flight Control System Malfunction
- Flap - Flight Control System Malfunction
- Flight Control System
- Fuel System
- Ground Maintenance
- Hydraulic System Malfunction
- Landing Gear System Malfunction
- Navigation System Malfunction
- Pneumatic System Malfunction
- Pressurization
- Rudder - Flight Control System Malfunction
- Tire Burst
- Unknown Technical Problem
- Non-Powerplant Parts or Pieces Separation
SYSTEM/COMPONENT FAILURE OR MALFUNCTION (POWERPLANT) (SCF–PP)
Failure or malfunction of an aircraft system or component related to the powerplant. Includes: Failures or malfunctions of any of the following: propellers, propeller system and engine gearbox, reversers, and powerplant controls.
Powerplant Parts or Pieces Separating from an Aircraft. Interruptions of the fuel supply caused by mechanical failures. System/Component Failure or Malfunction (Powerplant) (SCF–PP) subcategories in GACA Electronic Reporting System:
- Cooling System Malfunction
- Electric System
- Engine Gearbox Failure/Malfunction
- Engine Shutdown
- Engine Vibration
- Hydraulic System Malfunction
- Other or Unknown Powerplant Failure/Malfunction
- Pneumatic System Malfunction
- Powerplant Control Failure/Malfunction
- Powerplant Part or Piece Separation
- Propeller System
- Thrust Reverser Failure/Malfunction
TAXIWAY INCURSION (TI)
Any occurrence at an aerodrome involving the incorrect presence of an aircraft, vehicle, or person on the taxiway. According to GACAR Part 1, a taxiway is defined as a path on a land aerodrome established for the taxiing of aircraft and intended to provide a link between one part of the aerodrome and another, including aircraft stand taxi-lanes, apron taxiways, and rapid exit taxiways.
Includes: occurrences while ground and air taxiing for rotorcraft on designated taxiways; taxiway incursions resulting from the improper navigation of an aircraft at an aerodrome; taxiway incursions resulting from an ATC/ATM error.
Taxiway Incursion (TI) subcategories in GACA Electronic Reporting System:
- With Vehicle
- With Person
- With Aircraft
TURBULENCE ENCOUNTER (TURB)
In-flight turbulence encounter. Includes: Encounters with turbulence in clear air, mountain wave, mechanical, and/or cloud associated turbulence. Wake vortex encounters. Deviations from assigned altitude or electronic navigation path as a result of turbulence.
Turbulence Encounter (TURB) subcategories in GACA Electronic Reporting System:
- Clear Air Turbulence (CAT)
- Mountain Wave Turbulence
- Cloud-Associated Turbulence
- Wake Turbulence
UNDERSHOOT/OVERSHOOT (USOS)
A touchdown off the runway/helipad/helideck surface during the landing phase only. Undershoot/Overshoot (USOS) subcategories in GACA Electronic Reporting System:
- Off the Runway Touchdown
- Offside Touchdown
UNINTENDED FLIGHT IN IMC (UIMC)
Unintended flight in Instrument Meteorological Conditions (IMC). Unintended Flight in IMC (UIMC) subcategories in GACA Electronic Reporting System:
- Aircraft Not Equipped to Fly in IMC
- Loss of Visual References
- Pilot Not Qualified for IMC
UNKNOWN OR UNDETERMINED (UNK)
Insufficient information exists to categorize the occurrence. Includes cases in which the aircraft is missing, occurrences for which there is not enough information at hand to classify them, and occurrences for which additional information is expected in due course to allow better classification.
Unknown or Undetermined (UNK) subcategories in GACA Electronic Reporting System:
- Missing Aircraft
- Not Enough Information to Classify
WILDLIFE (WILD)
Collision with, risk of collision with, or evasive action taken by an aircraft to avoid wildlife on the movement area of an aerodrome or on a helipad/helideck in use. Includes: encounters with wildlife on a runway in use or on any other movement area of the aerodrome. Bird activity observed that does not result in collisions or near collisions.
Wildlife (WILD) has no subcategories in GACA Electronic Reporting System.
WIND SHEAR OR THUNDERSTORM (WSTRW)
Flight into wind shear or thunderstorm. Includes: Flight into wind shear and/or thunderstorm-related weather. In-flight events related to hail. Events related to lightning strikes. Events related to heavy rain (not just in a thunderstorm).
Deviations from assigned altitude or electronic navigation path as a result of wind shear. Injuries sustained as a result of thunderstorms and/or wind shear. Excludes: injuries sustained as a result of turbulence not caused by wind shear and/or thunderstorms.
Wind Shear or Thunderstorm (WSTRW) subcategories in GACA Electronic Reporting System:
- Dust Storm
- Flying into Hail
- Flying into Heavy Rain
- Flying into Thunderstorm
- Flying into Wind Shear
- Lightning Strike
- Winds
Section 1. Flight Data Analysis Program (FDAP)
2.7.1.1. PURPOSE AND OBJECTIVES. Under GACAR Part 5, a Flight Data Analysis Program (FDAP) is a required safety analysis program for General Authority of Civil Aviation Regulations (GACAR) Part 121 air operators that operate airplanes with a maximum takeoff mass greater than 27,000 kg. For all other operators the program is voluntary. An FDAP, when implemented, is designed to improve aviation safety through the proactive use of flight-recorded data. Operators use these data to identify and correct deficiencies in all areas of flight operations. Properly used, FDAP data can reduce or eliminate safety risks, as well as minimize deviations from regulations. Through access to FDAP data, the General Authority of Civil Aviation (GACA) can identify and analyze trends and target resources to reduce operational risks. This section defines the elements of an FDAP, the FDAP review process, and the roles of the principal operations inspector (POI) and other aviation safety inspectors (Inspectors) in monitoring continuing FDAP operations.
2.7.1.3. BACKGROUND. Governments, the aviation industry, the International Civil Aviation Organization (ICAO) and industry associations have sought additional means for addressing safety problems and identifying potential safety hazards. Based on the experiences and input from air carriers that operate in a world-wide environment and the input received from government/industry safety forums, the GACA in conjunction with the recommendations of ICAO has decided that the implementation of an FDAP program could help reduce air carrier accident rates below current levels. The value of FDAP programs is the early identification of adverse safety trends, which, if uncorrected, could lead to accidents.
A key element in FDAP is the application of corrective action and follow-up to ensure that unsafe conditions are effectively remediated. The Safety Management System (SMS) processes will play a key role in the identification and application of corrective action.
A. FDAP is a program for the routine collection and analysis of digital flight data generated during aircraft operations. FDAP programs provide more information about, and greater insight into, the total flight operations environment. FDAP data is unique because it can provide objective information that is not available through other methods. An FDAP program can identify operational situations in which there is increased risk, allowing the operator to take early corrective action before that risk results in an incident or accident. FDAP must interface and be coordinated with the operator’s Safety Management System. The FDAP program is another tool in the operator’s overall operational risk assessment and prevention program. Being proactive in identifying and addressing risk will enhance safety.
B. The term “FDAP” means a program for the routine collection and analysis of digital flight data gathered during aircraft operations, including data currently collected pursuant to existing regulatory provisions. The operator must include the FDAP as part of its Safety Management System, and the program documentation must include:
• A description of how data are collected and analyzed;
• Procedures and policies for ensuring the FDAP is non-punitive and contains adequate safeguards to protect the source(s) of the data;
• Procedures for taking corrective action that analysis of the data indicates is necessary in the interest of safety;
• Procedures for providing the GACA access to the FDAP information; and
• Procedures for informing the President of any corrective action performed.
The POI will monitor trends in FDAP data and the operator’s effectiveness in correcting adverse safety trends.
2.7.1.5. KEY TERMS. The following are key terms that an Inspector is likely to encounter in reviewing an operator’s FDAP program documentation: A. Aggregate Data. The summary statistical indices that are associated with FDAP event categories, based on an analysis of FDAP data from multiple aircraft operations.
B. Data Management Unit (DMU). A unit that performs the same data conversion functions as a Flight Data Acquisition Unit (FDAU), with the added capability of processing data onboard the aircraft. Additionally, this unit has a powerful data processor designed to perform in flight airframe/engine and flight performance monitoring and analysis. Some DMUs have ground data link and ground collision avoidance systems incorporated into the unit.
C. Data Validation. A process during which flight data are reviewed to verify that they are valid, accurate, and free of errors (such as might result from damaged sensors or faulty recording). D. Event. An occurrence or condition in which predetermined values of aircraft parameters are measured. If parameter values are exceeded, the Ground Data Replay and Analysis System (GDRAS) will flag the event for further analysis and record it in a database for trending.
E. Flight Data Acquisition Unit (FDAU). A device that acquires aircraft data via a digital data bus and analog inputs and formats the data for output to the Flight Data Recorder (FDR) according to regulatory requirements. Additionally, many FDAUs have capability that enables them to perform additional processing and distribution of data to Aircraft Condition Monitoring Systems (ACMS), Aircraft Communications Addressing and Reporting Systems (ACARS), Engine Condition Monitoring Systems (ECM), or to a Quick Access Recorder (QAR) for recording/storage of raw flight data. There are many varieties of FDAU, known by a number of different acronyms, but all perform the same core functions.
F. Ground Data Replay and Analysis System (GDRAS). A sophisticated software application that transforms flight-recorded data into a usable form, analyzes the data and detects events, and generates reports for review.
G. Implementation and Operations Plan (Plan). A detailed plan specifying all aspects of an operator’s FDAP program, including how the operator will collect and analyze data, procedures for taking corrective action based on findings from the data, means for providing the GACA with access to de-identified aggregate FDAP information/data, and procedures for informing the GACA of corrective actions undertaken.
H. Quick Access Recorder (QAR). An onboard recording unit that stores flight-recorded data. These units are designed to provide quick and easy access to a removable medium on which flight information is recorded. QARs may also store data in solid-state memory accessible through a download reader. It is possible in some modern systems for the functions of the QAR to be integrated into a single solid state recording system that also provides the flight data recorder functions that are required by regulation.
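The event-detection cycle described in the key terms above (predetermined parameter limits, flagging by the GDRAS, and recording for trending) can be sketched in Python. The parameter name, trigger limit, and severity bands below are invented for illustration only; real event sets are fleet-specific and defined in the operator's Plan.

```python
# Illustrative sketch of a GDRAS-style event scan. The event definition
# below (name, parameter, trigger limit, severity bands) is hypothetical.

EVENT_DEFINITIONS = {
    "high_rate_of_descent": {
        "parameter": "vertical_speed_fpm",
        "condition": lambda v: v < -1000,                     # trigger limit (assumed)
        "severity": lambda v: "high" if v < -1500 else "low",  # severity bands (assumed)
    },
}

def scan_flight(samples, definitions=EVENT_DEFINITIONS):
    """Flag events where recorded parameter values exceed predefined limits.

    `samples` is a list of dicts of per-second recorded parameters.
    Returns a list of flagged events suitable for storage in a trending database.
    """
    events = []
    for i, sample in enumerate(samples):
        for name, d in definitions.items():
            value = sample.get(d["parameter"])
            if value is not None and d["condition"](value):
                events.append({"event": name, "sample_index": i,
                               "value": value, "severity": d["severity"](value)})
    return events

flagged = scan_flight([
    {"vertical_speed_fpm": -600},    # within limits, no event
    {"vertical_speed_fpm": -1200},   # exceeds trigger limit
    {"vertical_speed_fpm": -1800},   # exceeds higher severity band
])
print(flagged)
```

In practice the GDRAS performs this scan over the full recorded parameter set for every flight and writes flagged events to the trending database described in term D.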
NOTE: Some organizations use the term Flight Operations Quality Assurance (FOQA) instead of FDAP.
2.7.1.6. ADDITIONAL GUIDANCE. Additional guidance on FOQA/FDAP can be found in FAA Advisory Circular 120-82.
2.7.1.7. OPERATOR RESPONSIBILITY. An operator may elect to contract the operation of an FDAP to another party, but it must retain overall responsibility for the maintenance and effectiveness of the FDAP.
2.7.1.9. FDAP CONCEPTS.
A. FDAP Is a Proactive Safety Program. Rather than using the vast amount of flight-recorded data only for accident investigation purposes, FDAP focuses on using these data to reduce the risk of, and perhaps prevent, aircraft accidents. FDAP programs seek to discover previously unknown risks and focus the efforts of safety personnel on reducing or eliminating these risks.
B. FDAP Is Focused on Aggregate Trends Rather Than Individual Flights. Through experience, operators with FDAP programs have learned that the primary value of the program comes from trend information based on aggregate FDAP data. Using these data, the operator can identify and correct systemic problems.
C. FDAP Analysis Is a Continuous Process. Data are gathered from the aircraft and used to identify trends. Corrective actions are devised to counter adverse trends. If needed, new corrective actions are devised and implemented. Data are used to evaluate the effectiveness of these new actions. This process continues until the actions are deemed successful, and then data are used to monitor long-term success and ensure there is no recurrence.
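As a rough illustration of the aggregate, continuous trend monitoring described above, the sketch below normalizes event counts to a rate per 100 flights and flags a rate that rises over several consecutive periods. The period structure and the three-period threshold are assumptions for illustration, not anything prescribed by GACAR Part 5.

```python
# Hypothetical sketch of aggregate trend monitoring: event counts are
# normalized per period, and an adverse trend is flagged when the rate
# rises over a number of consecutive periods (threshold assumed).

def event_rates(periods):
    """periods: list of (event_count, flight_count) tuples, one per period."""
    return [100.0 * events / flights for events, flights in periods]

def adverse_trend(rates, consecutive=3):
    """Return True if the rate has risen for `consecutive` successive periods."""
    run = 0
    for prev, cur in zip(rates, rates[1:]):
        run = run + 1 if cur > prev else 0
        if run >= consecutive:
            return True
    return False

# Four monthly periods with a steadily rising event rate (invented data).
rates = event_rates([(4, 800), (6, 850), (9, 820), (12, 790)])
print(adverse_trend(rates))
```

A flagged trend would then feed the corrective-action loop: devise an action, implement it, and keep monitoring the same rate to confirm the trend reverses.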
D. The Sum of FDAP Is Greater Than Its Parts. The FDAP data collected by an individual operator is valuable and useful in improving safety at that operator. Without the knowledge of what other operators are experiencing, however, an operator may not have a complete picture. The GACA, because it does not generate these data, may not have sufficient information about safety issues confronting the GACAR Part 121 air carrier industry and the airspace system they operate through.
The aggregation and sharing of FDAP data provides even greater opportunities for improvements in safety than any single operator’s program by itself may provide.
2.7.1.11. FDAP ORGANIZATIONAL STRUCTURE. The operator must put in place an FDAP structure that is capable of taking corrective actions where the data shows they are warranted. This structure must have the support of senior management and the FDAP stakeholders who will provide the human and financial resources needed to effect corrective actions. It needs to be able to coordinate actions across departmental lines, track assigned corrective actions, and hold departments accountable for implementation of agreed-upon actions. FDAP data can be generated, collected, transmitted, and analyzed, but if corrective actions are not implemented to reverse adverse trends, the FDAP program is not successful.
2.7.1.13. FDAP RESOURCES. The planning and implementation of an effective FDAP will result in a significant commitment of both human and financial resources. During the development of an FDAP it is important that the operator understand and be willing to commit the necessary personnel, time, and money needed to ensure that data collected in the program is used effectively. Before embarking on an FDAP, it is useful for an operator to consult with other operators with established FDAP programs to better gauge the resources required for the effort.
2.7.1.15. DEVELOPMENT OF AN FDAP. For existing certificate holders, a phased implementation of FDAP will be permitted under the transitional provisions authority of GACAR Part 199, as illustrated in the following diagram (Figure 2.7.1.1). NOTE: Although the following guidance is designed for the use of existing certificate holders who need to develop an FDAP acceptable to the President, new applicants may find this information useful for reference purposes during the certification process. New applicants will not be given the opportunity to develop a phased implementation plan. For all new applicants, the FDAP elements must be in place at the time of certification.
Figure 2.7.1.1. Planning and Preparation; Implementation and Operations; and Continuing Operations
A. Phase I. The planning stage sets the policy and direction for the FDAP effort. As these are developed, the necessary resources are committed to implement the program. The policies, procedures, resources, and operational processes for collecting, managing, and using FDAP data are then laid out in the Plan as the program blueprint, and submitted to the POI to obtain acceptance of the program. Essential activities during the planning stage include:
• Establishing a steering committee
• Defining goals and objectives
• Identifying and soliciting input from stakeholders
• Selecting technology and personnel
• Defining safeguards and events
B. Phase II. The second stage is plan implementation. It will begin once the Plan has been approved and will probably start with a limited number of aircraft. It will involve the installation of the equipment, training of personnel, and collection and processing of data from the aircraft. Work during this stage is focused on the validation of the program, including its logistics and security mechanisms. Essential activities during the implementation stage include:
• Implementing and auditing security mechanisms
• Installing equipment
• Training personnel
• Involving stakeholders
• Collecting and processing airborne data
• Analyzing and validating data
• Developing and documenting FDAP system procedures
• Defining start-up criteria
C. Phase III. The final stage is continuing operations. Once the operator has validated its processes and the accuracy of its data collection and analysis, it will officially launch the program. Data collected can then be used in trend identification, determination of corrective actions, and monitoring the effectiveness of those actions. Expansion of the program will occur as outlined in the FDAP Plan, or as documented in subsequent FDAP Plan revisions. Essential activities during the continuing operations stage include:
• Taking corrective actions
• Conducting periodic program reviews
• Tracking costs and benefits
• Evaluating emerging technologies
• Expanding data usage
• Communicating FDAP program benefits
2.7.1.17. PROGRAM ACCEPTANCE AND REVISION.
A. FDAP Acceptance. For an FDAP program to be acceptable to the President, the operator must develop and submit an FDAP Plan to the Director, Flight Operations Division as described in this section. The plan must be accompanied by a completed Plan checklist (Figure 2.7.1.2), documenting where the checklist items have been addressed in the plan, and a cover letter addressed to the POI requesting the plan be accepted.
1) Evaluation of the Plan should center on whether or not the structure set up for the FDAP program will realistically address the collection and analysis of data as well as procedures for taking corrective actions. Because of differences in air carriers, the POI is in the best position to assess whether the proposed FDAP program organization and amount of resources dedicated to it are adequate.
2) If the air carrier has elected to include aircraft maintenance monitoring as part of its FDAP program, the POI is responsible for coordinating with the principal maintenance inspector (PMI) in reviewing those portions of the initially proposed Plan, and subsequent revisions thereto, that pertain to aircraft maintenance.
3) The POI should review the proposed plan and, if it is adequate, accept it. 4) Once accepted, the air carrier must implement the plan. B. FDAP Program Revisions. Changes will occur in an operator’s program as it assimilates new technologies, modifies event definitions, and changes structures to meet its program’s growing needs. Changes are likely to be frequent during the early stages of an operator’s FDAP program.
When changes occur, the Plan should be revised to document those changes. 1) Operators should revise previous FDAP Plans in accordance with standard revision control methodology (i.e., revision control page with remove-and-replace instructions by page, list of effective pages, and both revision number and revision date on every page of the document). The operator should submit all plan revisions to the POI.
2) Revisions to Plans that have been accepted and implemented do not require letters of approval from the POI. Because such changes can be potentially frequent and voluminous, revisions to plans are considered accepted by the POI unless the POI notifies the carrier in writing, within 45 days of submission, that it has not accepted the revision.
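The 45-day deemed-acceptance rule for plan revisions lends itself to a simple date check. The sketch below assumes calendar days, which the text does not specify, and the function and status names are hypothetical.

```python
# Sketch of the deemed-acceptance rule: a revision is treated as accepted
# unless written non-acceptance is issued within 45 days of submission.
# Calendar-day counting is an assumption for illustration.

from datetime import date, timedelta

def revision_status(submitted, today, rejected_on=None):
    """Return 'not accepted', 'accepted', or 'pending' for a plan revision."""
    deadline = submitted + timedelta(days=45)
    if rejected_on is not None and rejected_on <= deadline:
        return "not accepted"
    return "accepted" if today > deadline else "pending"

print(revision_status(date(2024, 1, 10), date(2024, 3, 1)))   # deadline has passed
print(revision_status(date(2024, 1, 10), date(2024, 2, 1)))   # still within 45 days
```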
2.7.1.19. MONITORING OF ACCEPTED PROGRAMS AND DATA/INFORMATION SHARING.
Once the FDAP Plan is accepted, the POI should monitor the overall progress of program implementation. The POI should participate in the periodic meetings of the operator’s FDAP review team. The focus of these meetings should be on the identification and correction of potential threats to safety uncovered by aggregate FDAP trend information.
A. If at any time the POI determines that the FDAP is no longer acceptable to the President, the President will take the necessary steps to ensure the certificate holder makes the necessary adjustments, or the operator will risk certificate action.
2.7.1.21. GACA Activity Reporting (GAR). In order to track FDAP monitoring activities, the POI must enter the code FDAP on the GAR Data Sheet for work associated with oversight of the operator’s FDAP program.
Existing operators should use the following checklist to prepare their Plans and verify they are including all required materials. The POI will review this checklist to determine that the Plan has specified the items required in an FDAP program. This checklist identifies the minimum requirements of a Plan. An operator’s Plan may contain additional information in excess of these minimum requirements.
When an existing operator submits a Plan for the POI’s review, a completed copy of this checklist should accompany it. NOTE: Although the following checklist is designed for the use of existing certificate holders who need to develop an FDAP that is acceptable to the President, new applicants may find this information useful for reference purposes during the certification process. New applicants will not be given the opportunity to develop a phased implementation plan. For all new applicants, the FDAP elements must be in place at the time of certification.
The “Response” column must be completed for each question. Appropriate responses are “Yes,” “No,” or “NA” (not applicable). All “No” and “NA” responses should include, in the “Comment” column, a brief explanation of each such response.
The “Reference” column is to be completed for each question to which the air carrier provides a “Yes” response. The information provided in the “Reference” column must identify the specific location of the subject item in the Plan (e.g., Plan page number and Plan paragraph number).
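The completion rules for the Response, Reference, and Comment columns can be expressed as a small validation routine. The dictionary layout below is a hypothetical representation of a checklist row, used only to illustrate the rules stated above.

```python
# Validation sketch of the checklist completion rules: every question needs
# a "Yes", "No", or "NA" response; "Yes" must cite a Plan location in the
# Reference column; "No"/"NA" need a brief explanation in the Comment column.
# The row layout (dict keys) is an assumption for illustration.

def validate_checklist(rows):
    """rows: list of dicts with keys 'question', 'response', 'reference', 'comment'."""
    problems = []
    for row in rows:
        resp = row.get("response")
        if resp not in ("Yes", "No", "NA"):
            problems.append((row["question"], "response must be Yes, No, or NA"))
        elif resp == "Yes" and not row.get("reference"):
            problems.append((row["question"], "Yes responses must cite a Plan location"))
        elif resp in ("No", "NA") and not row.get("comment"):
            problems.append((row["question"], "No/NA responses need a brief explanation"))
    return problems

issues = validate_checklist([
    {"question": 1, "response": "Yes", "reference": "Plan p. 4, para 2.1"},
    {"question": 2, "response": "NA", "comment": ""},   # missing explanation
])
print(issues)
```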
FDAP PLAN CHECKLIST

General (Response / Reference / Comment)
1. Has the certificate holder requested approval of the Plan in a cover letter addressed to the POI, accompanying submittal of the plan? ◻ Yes ◻ No ◻ NA
2. Has a copy of the cover letter and plan been forwarded to the POI? ◻ Yes ◻ No ◻ NA
3. Does the Plan identify the personnel, system equipment, and resources committed to support the FDAP program? ◻ Yes ◻ No ◻ NA
4. Does the Plan acknowledge the requirement to document revisions in accordance with standard revision control methodology? ◻ Yes ◻ No ◻ NA
5. Does the Plan acknowledge that, following initial POI approval, the certificate holder must document subsequent modifications to the FDAP program in revisions submitted to the POI? ◻ Yes ◻ No ◻ NA

FDAP Plan (Response / Reference / Comment)
1. Does the plan clearly specify the goals and objectives of the FDAP program? ◻ Yes ◻ No ◻ NA
2. Does the plan clearly identify the major stakeholders within the air carrier? ◻ Yes ◻ No ◻ NA
3. Does the plan describe air carrier data safeguard and protection mechanisms? ◻ Yes ◻ No ◻ NA
4. Does the plan identify the air carrier fleets (make, model, series) that are targeted for participation in the FDAP program? ◻ Yes ◻ No ◻ NA
5. Does the plan describe the capabilities of the planned airborne equipment for FDAP? ◻ Yes ◻ No ◻ NA
6. Does the plan identify provisions for airborne equipment maintenance and support? ◻ Yes ◻ No ◻ NA
7. Does the plan specify a fleet installation plan? ◻ Yes ◻ No ◻ NA
8. Does the plan describe the capabilities of the proposed ground data replay and analysis system (GDRAS)? ◻ Yes ◻ No ◻ NA
9. Does the plan identify provisions for maintenance of the GDRAS hardware and software? ◻ Yes ◻ No ◻ NA
10. Does the plan describe other key technology components of the air carrier’s FDAP program? ◻ Yes ◻ No ◻ NA
11. Does the plan designate a single point of contact to oversee the FDAP program? ◻ Yes ◻ No ◻ NA
12. Does the plan define the air carrier’s organizational structure for oversight and operation of the FDAP program? ◻ Yes ◻ No ◻ NA
13. Does the plan describe the roles and responsibilities of key air carrier personnel and teams? ◻ Yes ◻ No ◻ NA
14. Does the plan specify the schedule and timeline for implementing the FDAP program? ◻ Yes ◻ No ◻ NA
15. Does the plan specify FDAP program start-up criteria? ◻ Yes ◻ No ◻ NA
16. Does the plan describe how key FDAP team members will be trained? ◻ Yes ◻ No ◻ NA
17. Does the plan describe how the air carrier will educate its pilots about the FDAP program? ◻ Yes ◻ No ◻ NA
18. Does the plan describe a method for educating senior management and stakeholders? ◻ Yes ◻ No ◻ NA
19. Does the Plan specify procedures for implementing and auditing security mechanisms? ◻ Yes ◻ No ◻ NA
20. Does the plan specify a data storage and retention policy? ◻ Yes ◻ No ◻ NA
21. Does the plan specify flight data collection and retrieval procedures? ◻ Yes ◻ No ◻ NA
22. Does the plan describe the procedures for defining fleet-specific events and associated parameters? ◻ Yes ◻ No ◻ NA
23. Does the plan provide the fleet-specific event definitions, including trigger limits for each event’s severity classification? ◻ Yes ◻ No ◻ NA
24. Does the plan describe the procedures for validating, refining, and tracking event definitions? ◻ Yes ◻ No ◻ NA
25. Does the plan acknowledge that updates to FDAP event definitions must be included in the Plan revisions submitted to the POI? ◻ Yes ◻ No ◻ NA
26. Does the plan specify procedures for data review and evaluation? ◻ Yes ◻ No ◻ NA
27. Does the plan provide for notifying appropriate air carrier departments (e.g., flight crew training) of adverse trends revealed by FDAP data? ◻ Yes ◻ No ◻ NA
28. Does the plan specify procedures for taking, tracking, and following up on corrective actions? ◻ Yes ◻ No ◻ NA
29. Does the plan describe guidelines for crewmember contact and follow-up? ◻ Yes ◻ No ◻ NA
30. Does the plan include a description of how FDAP system procedures will be documented? ◻ Yes ◻ No ◻ NA
31. Does the plan describe the process for joint POI/air carrier periodic reviews of the FDAP program and associated aggregate data? ◻ Yes ◻ No ◻ NA

Figure 2.7.1.2. FDAP Plan Checklist
Section 2. Fatigue Risk Management System (FRMS)
2.7.2.1 GENERAL. Until such time as more detailed guidance is developed for the assessment and approval of a fatigue risk management system, inspectors should consult the following ICAO documents that address this subject:
- Manual for the Oversight of Fatigue Management Approaches (ICAO Doc 9966), 2nd Edition
- Fatigue Management Guide for Airline Operators (Supplement to ICAO Doc 9966, 2nd Edition)
- Fatigue Management Guide for GA Operators of Large and Turbojet Aeroplanes (Supplement to ICAO Doc 9966, 2nd Edition)
- Fatigue Management Guide for Air Traffic Service Providers (Supplement to ICAO Doc 9966, 2nd Edition)
2.7.2.3 FRMS APPROVAL. The approval of an FRMS is made with the issuance of an operations specification (OpSpec). Specifically, OpSpecs 121.A.010, 125.A.010, 133.A.010, 135.A.010, and 171.A.010 are used for this purpose. Consult Volume 15 of this eBook for further instructions on how to issue these OpSpecs.
Section 3. Flight Safety Documents System (FSDS)
NOTE: This guidance to be developed at a later date.
2.8.1.1 GACA ACTIVITIES REPORT
A. GACAR § 91.3(f) stipulates that each PIC is responsible for notifying the nearest appropriate authority by the quickest available means of any accident involving the aircraft resulting in serious injury or death of any person or substantial damage to the aircraft or property.
B. In accordance with GACAR Part 4, service providers are required to report all occurrences to the President. Reporting occurrences is made mandatory for aircraft operators, aerodrome services providers, air traffic service providers, air navigation service providers, and ground service providers. The occurrence reporting requirements are also applicable for repair stations and aircraft training organizations that use aircraft.
C. GACAR § 21.5 states: “The holder of a supplemental type certificate, repair design approval, SAPMA or SATSO authorization must have a system for collecting, investigating, and analyzing reports of and information related to failures, malfunctions, defects, or other occurrences that cause or might cause adverse effects on the continuing airworthiness of the product, part or appliance covered by the design approval. Information about this system must be made available to all known operators of the product, part or appliance, and on request, to any person authorized under other associated implementing General Authority of Civil Aviation Regulation (GACAR).”
D. Each aircraft operator must submit service difficulty reports for the occurrence or detection of each failure, malfunction, or defect in accordance with §§ 121.1553, 135.695, and 125.539. In addition, each certificate holder must report any other failure, malfunction, or defect in an aircraft that occurs or is detected at any time if, in its opinion, that failure, malfunction, or defect has endangered or may endanger the safe operation of an aircraft.
E. GACAR § 145.1203 stipulates that a certificated repair station must report to the GACA and the organization responsible for the type design of an article within 96 hours after it discovers any serious failure, malfunction, or defect of that article. The report must be in a format acceptable to the President.
F. In accordance with GACAR § 91.233(d), aircraft operators must preserve flight recorder records; flight recorders must be deactivated upon completion of flight time following an aircraft accident or incident.
The flight recorders must not be reactivated before their records are retained and the operator must keep the flight recorder records for at least 60 working days or, if requested by the President or the Aviation Investigation Bureau (AIB), for a longer period.
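The 60-working-day minimum retention period can be computed as follows. Which weekdays count as working days is not defined in this text; the Friday/Saturday weekend used below is an assumption for illustration only.

```python
# Sketch of the minimum flight recorder record retention window: at least
# 60 working days from the occurrence. The weekend definition (Friday and
# Saturday, weekday() values 4 and 5) is an assumption for illustration.

from datetime import date, timedelta

def retention_end(start, working_days=60, weekend=(4, 5)):
    """Return the date on which `working_days` working days have elapsed after `start`."""
    d, counted = start, 0
    while counted < working_days:
        d += timedelta(days=1)
        if d.weekday() not in weekend:
            counted += 1
    return d

print(retention_end(date(2024, 1, 1)))
```

Any request by the President or the AIB extends the retention period beyond this computed minimum.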
G. GACAR § 133.193(d) states that the operator must, in accordance with GACAR § 109.97, report without delay to the President where the accident or incident occurred: (1) any incidents or accidents involving dangerous goods; and (2) the finding of undeclared or wrongfully declared dangerous goods discovered in cargo or passengers’ baggage.
H. GACAR §171.853, on Mandatory Reporting of Occurrences, stipulates that, in addition to all accidents, serious incidents, and incidents that must be notified and reported to the President as prescribed under GACAR Part 4, each ATS provider must ensure that occurrences covered by Appendix A of Part 171 are reported to the President within 24 hours of their occurrence. Paragraph §171.855(b)(2), on Document Retention, states that log material secured as a result of an incident or regulatory infraction is retained until the investigation or enforcement action is completed. Such log material may only be deleted if agreed by the President.
I. Requirements for reporting occurrences by aerodrome and heliport operators and ground service providers are stipulated in §§139.135(a)(8), 139.155(3), 138.135(a)(8), 138.155(c), 137.135(a)(9), 137.155(c), and 151.117, respectively.
2.8.1.3 OBJECTIVES
A. This section guides aircraft operators, aerodrome operators, ANS providers, and ground service organizations in recording, notifying, investigating, and reporting occurrences to the President.
This section also details the following procedures:
1) Details of mandatory and voluntary occurrence reporting and confidential reporting
2) Preservation of accident aircraft, aircraft contents, and records soon after an accident or serious incident
3) Preservation of evidence and records relating to aerodrome accidents, ANS facility malfunctions, and airspace incidents
4) Reporting latent hazards and findings of internal audits/inspections
5) Reporting corrective actions for GACA oversight findings
6) Reporting of occurrences detected during flight data analysis and reliability analysis
7) Reporting aircraft fleet statistics and operational performance of aircraft operators, and reporting of major defects observed by repair stations
8) Reporting operational statistics and safety performance of aerodromes, ATS, and ground services.
B. This section describes aviation accident and incident investigation procedures, including safety data collection and analysis details.
C. This section provides guidance and procedures to GACA inspectors to perform the functions related to occurrence reporting, safety data collection, and investigation of occurrences. This section also describes the provisions for protecting the identity of reporters, protecting the information received through voluntary reporting, and sharing safety information among stakeholders without identifying the name of the organization or person.
2.8.1.5 APPLICABILITY
A. The procedures described in this section of the eBook are applicable for reporting accidents, serious incidents, and other occurrences of aircraft registered in the Kingdom of Saudi Arabia, and occurrences of foreign-registered aircraft observed within the territory of the KSA. The occurrences other than accidents and serious incidents include aircraft defects, aerodrome or ATC deficiencies, operational deviations, non-compliances with regulations, and latent safety hazards in the aviation system. The occurrence reporting procedures are applicable to KSA-registered aircraft operating within the territorial limits or elsewhere, irrespective of their operation base or place of occurrence.
B. This section is applicable to:
1) All certificate holders operating under GACAR Parts 121, 135, and 125; certificate of authorization holders operating under GACAR Part 91; and aircraft repair stations certificated under GACAR Part 145.
2) Training schools approved under GACAR Part 141, and other training organizations approved under Parts 142 and 143 if they use aircraft for training purposes.
3) Aerodrome operators certificated or authorized under GACAR Part 139; water aerodromes operating under the provisions of GACAR Part 137; heliports operating under Part 138; and ground service providers certificated under GACAR Part 151.
4) Organizations dealing with the transportation of dangerous goods as described under GACAR Part 109.
5) ATS providers certificated under GACAR Part 171, and aeronautical telecommunication service providers operating as per the provisions of GACAR Part 173.
C. This section is also applicable to aeronautical information services providers certificated under GACAR Part 175, instrument flight procedures service providers certificated under the provisions of GACAR Part 172, and meteorological services providers operating as per the requirements of GACAR Part 179.
Section 2. Definitions
2.8.2.1 For this volume of the eBook, the following definitions are used: A. Aircraft component: Any part, the soundness and correct functioning of which, when fitted on an aircraft, is essential to the continued airworthiness of the aircraft; the term includes any item of equipment.
B. Defect: A condition existing in an aircraft, system, or component, arising from any cause other than damage, which would preclude it or another aircraft component from performing its intended function, or would reduce the expected service life of the aircraft or aircraft component.
C. Major Defect: Any defect of such a nature that it reduces the safety of the aircraft or its occupants. It includes defects that result in an emergency condition or require specific maintenance action. D. Repetitive Defect: A defect in an aircraft (including its components and systems) which recurs despite rectification attempts.
E. Maintenance: The performance of tasks required to ensure the continuing airworthiness of an aircraft, including any one or combination of overhaul, inspection, replacement, defect rectification, and the embodiment of a modification or repair, other than preflight inspection.
F. Repair: The restoration of an aeronautical product to an airworthy condition to ensure that the aircraft continues to comply with the type design requirements. G. Major Repair: A repair that restores the airworthiness condition and appreciably affects airworthiness by changing weight, balance, structural strength, performance, powerplant operation, or flight characteristics.
H. Minor Repair: A repair other than a major repair. I. Operator: A person, organization, or enterprise engaged in or offering to engage in aircraft operation. Note: Operators include scheduled and non-scheduled commercial air operators, State aircraft operators, private aircraft operators, and any other organization or person engaged in aircraft operation.
J. Scheduled Commercial Air Operator: An aircraft operator that operates its fleet, in whole or in part, as per a published schedule. K. Approved Maintenance Organization (AMO): An organization approved by GACA as per the requirements of GACAR Part 145.
L. Aircraft Fleet: A minimum of three aircraft of a particular type/model constitutes a fleet. M. Level 1 Finding: Any significant non-compliance with GACAR requirements which lowers the safety standard and seriously hazards air safety.
N. Level 2 Finding: Any non-compliance with the GACAR requirements which could lower the safety standard and possibly hazards air safety. O. Level 1 Occurrences: Accidents or serious incidents requiring independent investigation by the AIB.
P. Level 2 Occurrences: A significant incident involving circumstances indicating that an accident or a serious incident could have occurred if the risk had not been managed within safety margins. Significant incidents must be investigated within the organization under the monitoring of GACA. Note: GACA may investigate any occurrence based on its severity. Initial notification and reporting are necessary for level 2 occurrences listed in the Appendix of GACAR Part 4. Q. Level 3 Occurrences: Safety issues that are not covered under level 1 or 2; such issues can become level 1 or 2 occurrences if they are not corrected. Level 3 occurrences include latent safety hazards, system malfunctions, and procedural deviations.
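The three occurrence levels defined above can be summarized as a simple lookup. The sketch below is illustrative only; the enum and the responsible-body strings are assumptions for illustration, not GACAR text:

```python
from enum import Enum

class OccurrenceLevel(Enum):
    """Occurrence levels as defined in O, P, and Q above (illustrative)."""
    LEVEL_1 = 1  # accident or serious incident
    LEVEL_2 = 2  # significant incident
    LEVEL_3 = 3  # latent hazard, system malfunction, procedural deviation

# How each level is handled (assumed shorthand labels).
HANDLING = {
    OccurrenceLevel.LEVEL_1: "independent investigation by the AIB",
    OccurrenceLevel.LEVEL_2: "investigated within the organization, monitored by GACA",
    OccurrenceLevel.LEVEL_3: "reported via periodic statistics and safety reports",
}

assert HANDLING[OccurrenceLevel.LEVEL_1] == "independent investigation by the AIB"
```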
Section 3. Types of Occurrence Reporting
2.8.3.1 The occurrence reporting system is broadly classified as a mandatory reporting system or a voluntary reporting system. A. The mandatory incident reporting system facilitates data collection on actual or potential safety deficiencies. The mandatory reporting system involves reporting data on accidents and serious incidents to the AIB. Occurrence data is analyzed and exchanged between the AIB and GACA to make regulatory revisions, to take policy decisions on safety measures, and to effectively implement the State Safety Program. The mandatory incident reporting system includes reporting of the following occurrences:
(1) Accident – An occurrence associated with the operation of an aircraft which takes place between the time any person boards the aircraft with the intention of flight until all such persons have disembarked. In the case of an unmanned aircraft, it takes place between the time the aircraft is ready to move with the purpose of flight until it comes to rest at the end of the flight and the primary propulsion system is shut down, and in which:
(a) A person is fatally or seriously injured as a result of:
(i) being in the aircraft, or
(ii) direct contact with any part of the aircraft, including parts that have become detached from the aircraft, or
(iii) direct exposure to jet blast,
except when the injuries are from natural causes, self-inflicted or inflicted by other persons, or when the injuries are to stowaways hiding outside the areas usually available to the passengers and crew; or
(b) The aircraft sustains damage or structural failure which:
(i) adversely affects the structural strength, performance, or flight characteristics of the aircraft, and
(ii) would usually require major repair or replacement of the affected component,
except for engine failure or damage, when the damage is limited to a single engine (including its cowlings or accessories), to propellers, wingtips, antennas, probes, vanes, tires, brakes, wheels, fairings, panels, landing gear doors, windscreens, or the aircraft skin (such as small dents or puncture holes), or for minor damage to main rotor blades, tail rotor blades, landing gear, and that resulting from hail or bird strike (including holes in the radome); or
(c) The aircraft is missing or is completely inaccessible.
Note 1: For statistical uniformity only, an injury resulting in death within thirty days of the date of the accident is classified, by ICAO, as a fatal injury.
Note 2: An aircraft is considered missing when the official search has been terminated and the wreckage has not been located.
Note 3: Section 5.1 of ICAO Annex 13 describes details of investigating unmanned aircraft.
Note 4: Attachment F of ICAO Annex 13 provides guidance for determining aircraft damage.
Note 5: Also referred to as aircraft accident.
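Note 1's thirty-day convention lends itself to a small worked check. The function below is an illustrative sketch of that statistical classification only; the function name and the inclusive 30-day boundary reading are assumptions:

```python
from datetime import date

def is_fatal_injury_for_statistics(accident_date: date, death_date: date) -> bool:
    """ICAO statistical convention (Note 1 above): an injury resulting in
    death within thirty days of the accident date is classified as a fatal
    injury. The inclusive boundary here is an assumed reading."""
    return 0 <= (death_date - accident_date).days <= 30

# Death 29 days after the accident: classified as fatal for statistics.
assert is_fatal_injury_for_statistics(date(2024, 1, 1), date(2024, 1, 30))
# Death 45 days later: outside the statistical classification window.
assert not is_fatal_injury_for_statistics(date(2024, 1, 1), date(2024, 2, 15))
```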
(2) Serious incident – An incident involving circumstances indicating that there was a high probability of an accident, associated with the operation of an aircraft, which takes place between the time any person boards the aircraft with the intention of flight until all such persons have disembarked. In the case of an unmanned aircraft, it takes place between the time the aircraft is ready to move with the purpose of flight until it comes to rest at the end of the flight and the primary propulsion system is shut down.
Note 1: The difference between an accident and a serious incident lies only in the result. Note 2: ICAO Annex 13, Attachment C, lists examples of serious incidents. The serious incidents listed below are typical examples. The list is not exhaustive and only serves as guidance to the definition of serious incidents.
(a) Near collisions requiring an avoidance maneuver to avoid a collision or an unsafe situation, or when an avoidance action would have been appropriate
(b) Collisions not classified as accidents
(c) Controlled flight into terrain only marginally avoided
(d) Aborted take-offs on a closed or engaged runway, on a taxiway, or on an unassigned runway
(e) Take-offs from a closed or engaged runway, from a taxiway, or from an unassigned runway
(f) Landings or attempted landings on a closed or engaged runway, on a taxiway, or on an unassigned runway
(g) Gross failures to achieve predicted performance during take-off or initial climb
(h) Fires or smoke in the cockpit, passenger compartment, or cargo compartment, or engine fires
(i) Events requiring the emergency use of oxygen by the flight crew
(j) Failure of aircraft structure or engine, including uncontained turbine engine failures (not classified as an accident)
(k) Malfunctions of multiple aircraft systems that seriously affect the operation of the aircraft
(l) Flight crew incapacitation in flight
(m) Fuel quantity level or distribution situations requiring the declaration of an emergency by the pilot, such as insufficient fuel, fuel exhaustion, fuel starvation, or inability to use all usable fuel on board
(n) Runway incursions classified with severity Category A. The ICAO Manual on the Prevention of Runway Incursions (Doc 9870) contains information on the severity classifications
(o) Take-off or landing incidents, such as undershooting, overrunning, or running off the side of runways
(p) System failures, weather phenomena, operations outside the approved flight envelope, or other occurrences which caused or could have caused difficulties controlling the aircraft
(q) Failures of more than one system in a redundancy system mandatory for flight guidance and navigation
(r) The unintentional or, as an emergency measure, the intentional release of a slung load or any other load carried external to the aircraft
Initial notification of a serious incident. The PIC of the aircraft must notify both the President and the Aviation Investigation Bureau (AIB) as soon as practicable. The operator may notify GACA/AIB of the incident if the PIC is seriously injured or the aircraft is not traceable.
Note: The PIC/operator, instead of the initial notification, may submit a final incident report with all details, using the prescribed format, as soon as practicable within the notification period.
Reporting of serious incidents. The PIC of the aircraft must report the details of the serious incident to GACA and the AIB, using the prescribed format given in Appendix 1, within 48 hours of the incident. The operator may submit the full incident report to GACA/AIB if the PIC is seriously injured or the aircraft is not traceable. Unless otherwise directed by the AIB or the President, all serious incidents must be investigated by the operators to identify root causes and initiate timely rectification actions, and the investigation reports must be submitted to the AIB and GACA.
(3) Service Difficulty Reports (SDR). A certificated repair station, as per §145.1203, must report to GACA within 96 hours after discovering any serious failure, malfunction, or defect of an article of an operator's aircraft. Service difficulty reports must be submitted by the aircraft operators in accordance with §§121.1553, 125.539, and 135.695. The report must contain at least the following details:
(a) Aircraft registration number
(b) Type, make, and model of the article
(c) Date of the discovery of the failure, malfunction, or defect
(d) Nature of the failure, malfunction, or defect
(e) Time since last overhaul, if applicable
(f) Apparent cause of the failure, malfunction, or defect; and
(g) Other pertinent information necessary for complete identification, determination of seriousness, or corrective action.
A format for SDR is given in Appendix 2B2.
(4) Noncompliance, equipment failures, and latent safety hazards.
The mandatory system includes reporting of noncompliance, system/equipment failures, and latent safety issues that are not covered under accident, serious incident, or SDR reporting. (a) Reporting of equipment and system failures.
(i) Aircraft defects covered in the minimum equipment lists
(ii) Breakdowns in aerodrome signs, lighting, and marking systems
(iii) Failures of ground communication and navigation equipment
(iv) Failures in ATC computers and radar systems
(v) Failures of ground services equipment and systems
The details of aircraft defects and of equipment failures in aerodromes and air navigation systems must be reported in the monthly statistics and safety performance reports.
(b) Reporting non-compliances. Reporting non-compliance to GACA is required if any certificated or authorized organization fails to comply with the GACAR requirements for flight operations, aircraft maintenance, ANS, aerodrome operations, and ground services. Non-compliances with regulations include:
(i) Management personnel
(ii) Human resources, including competency and rating
(iii) Aircraft and tools/equipment
(iv) Documents, manuals, and procedures
(v) Adherence to the quality system, internal audits, and the SMS
(vi) Maintenance of records relevant to the certification
(vii) Occurrence reporting
(c) The occurrences of non-compliances with regulations include:
(i) Non-compliances with operating regulations, deviations from operating procedures, or lack of approved procedures in flight operations, aircraft maintenance, aerodromes, and ANS
(ii) Findings identified by internal audits and GACA surveillance
The monthly statistics and performance reports should contain the details of audit findings and remedial measures. However, level 1 findings require immediate remedial actions and must be reported to GACA.
(d) Reporting of Operational Statistics and Safety Performance. Certificated organizations are required to submit periodic reports on operational statistics and safety performance as prescribed in Appendices 3 through 7. The performance reporting includes operators providing details of flight data analysis and ATC recorder analysis, which are to be reported to GACA in the monthly statistics and safety performance report as per the periodicity mentioned below:
(i) The monthly performance statistics and safety reporting requirement is applicable to scheduled commercial operators, certificated repair stations, air navigation service providers, certificated aerodrome operators, and certificated ground services providers; annual reports are to be submitted by authorization holders in respect of aerodromes and heliports.
(ii) The quarterly performance statistics and safety reporting requirement is applicable to non-scheduled commercial air operators.
(iii) The half-yearly performance statistics and safety reporting requirement is applicable to operators approved under Part 125.
(iv) The annual performance statistics and safety reporting requirement is applicable to operators approved under Part 91 and training organizations operating aircraft.
(e) Mechanical Interruption Summary Report (MISR). As per eBook Volume 4.5.13.5.B, following the receipt of an operator’s Mechanical Interruption Summary Report (MISR) under §121.1557, the data must be evaluated to identify problem areas and significant trends. If a problem area or trend is evident, GACA will decide on a course of action to investigate and correct the problem. GACA may seek the filing of an SDR if it is appropriate for the aircraft manufacturer to resolve the safety issues.
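The periodicities in (i) through (iv) can be tabulated in one place. The mapping below is a sketch with assumed shorthand category labels, not an official GACA list:

```python
# Reporting periodicity per certificate/approval category, per (i)-(iv) above.
# Category keys are illustrative shorthand, not GACAR terminology.
REPORTING_PERIODICITY = {
    "scheduled commercial operator": "monthly",
    "certificated repair station": "monthly",
    "air navigation service provider": "monthly",
    "certificated aerodrome operator": "monthly",
    "certificated ground services provider": "monthly",
    "aerodrome/heliport authorization holder": "annual",
    "non-scheduled commercial air operator": "quarterly",
    "Part 125 operator": "half-yearly",
    "Part 91 operator": "annual",
    "training organization operating aircraft": "annual",
}

assert REPORTING_PERIODICITY["Part 125 operator"] == "half-yearly"
```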
B. Voluntary Incident Reporting System: The voluntary incident reporting system is a proactive process of collecting information about safety hazards which would not otherwise be captured by a mandatory reporting system.
(1) The voluntary occurrence reporting system encourages people to report aviation safety information without any regulatory obligation. Reporters can report any safety issues to GACA directly or confidentially. Voluntary incident reporting can be direct reporting or confidential reporting.
(2) In a voluntary reporting system, the information is protected and not used against the reporting persons or those persons referred to in the report. Reporting by an individual from outside the organization will be treated as safety information. This reporting system is non-punitive, error-tolerant, and just, to encourage further reporting of safety-related information.
(3) A confidential safety reporting provision is available to various stakeholders, such as company employees, management personnel (all levels), customers, the traveling public, contractors, vendors or suppliers, auditors, and airport tenants, for reporting safety concerns using the GACA web-based safety reporting system, e-mail, or a hard copy of the report.
(4) Without prejudice to the discharge of its responsibilities, GACA will not disclose the name or details of the reporting person or related persons. Should there be any safety follow-up action arising from the report, GACA will take all reasonable steps to avoid disclosing the identity of the reporting person or those individuals involved in the occurrence.
(5) GACA regulations safeguard the safety data received and protect the identity of the reporting person to encourage voluntary reporting and nurture a strong safety culture.
Section 4. Reporting of Accidents and Serious Incidents
2.8.4.1 Accidents and serious incidents (level 1) are required to be notified immediately to the KSA AIB as per the requirements of Article 108 Para 2 of the KSA Civil Aviation Law. The KSA AIB is an independent body entrusted with investigating accidents and incidents.
2.8.4.3 Accidents and serious incidents must, in the first instance, be notified (initial information) to the AIB/GACA via the toll-free (24-hour) telephone numbers published on the website: https://gaca.gov.sa
2.8.4.5 Accident/incident notifications must be submitted to the President of GACA and the AIB within 48 hours of occurrence, as per the requirements stipulated in §4.11(a) of GACAR Part 4. The notification should contain as much information about the accident or serious incident as is within the person’s knowledge at the time of submitting the notification.
A. The PIC or accountable executive must submit a notification. The notification must contain at least the following information:
1) Name of the aircraft operator/owner
2) Date, time, and place of the accident/serious incident
3) Type, nationality, and registration marks of the aircraft
4) Type of operation
5) Nature of the accident/serious incident
6) Position or last known position of the aircraft based on an easily defined geographical point
7) Name of the pilot-in-command of the aircraft
8) Last point of departure of the aircraft
9) Next point of intended landing
10) Description of the sky condition, precipitation, wind velocity, and visibility, and the number of persons onboard the aircraft
11) Number of fatalities and seriously injured persons, including crew members
12) Number of fatalities and seriously injured persons on the ground
13) Details of damage to the aircraft
B. Refer to Appendix 1 of this volume of the eBook for the format to notify accidents and serious incidents. The notifications may also be filed online at www.aib.gov.sa
2.8.4.7 Disclosure of information (including personal information): GACA and the AIB may disclose any information to other organizations or individuals in the interests of safety. Where possible, GACA and the AIB will remove information that directly identifies an individual (i.e., names, license numbers, and addresses). However, GACA/AIB may disclose other non-specific identifiers (i.e., times, dates, and locations of incidents) in the interests of safety.
If the information is the subject of an investigation, GACA/AIB will use such data in accordance with the provisions of GACAR Part 4. See also the AIB’s Privacy Policy at www.aib.gov.sa.
2.8.4.9 Information Sharing. The AIB and GACA have developed a mandatory notifications information sharing policy. The policy outlines the requirements for information sharing and provides advice on how information will be used when shared between the two agencies.
2.8.4.11 False or misleading reports. Submission of information known by the reporter to be false or misleading is a serious offense under GACAR Part 3. Aiding, abetting, counseling, procuring, or urging the submission of false or misleading information is also a serious offense.
CHAPTER 8. OCCURRENCE REPORTING SYSTEM
Section 5. Reporting and Investigation of Incidents - General
2.8.5.1 Initial Notification of Incidents. All significant incidents require initial notification, reporting of details, and reporting of the investigation to the President. All such reportable events are to be notified to the President within 96 hours of occurrence, as stipulated in GACAR §4.13. The President will evaluate each occurrence report received to decide which occurrences require investigation by GACA or the operators. Unless otherwise directed by the President, all organizations must carry out the investigation, identify root causes, and take remedial measures for the incidents.
2.8.5.3 Other occurrences (level 3), such as latent safety hazards, system malfunctions, and safety procedural deviations identified through the internal reporting system, the internal audit system, actions on GACA surveillance findings, the FDAP, and ATC recorder analysis, are to be reported to the President through the periodical operational statistics and safety reporting system, in accordance with the periodicity mentioned.
A. The following reportable incidents should be reported to the President as soon as practicable, within the period stipulated in the respective Part of the GACARs. 1) All incidents of aircraft operations under the operator’s certification/authorization and related to the transportation of dangerous goods are required to be reported under GACAR Part 109.
Appendix B of GACAR Part 4 contains samples of reportable incidents of aircraft operations.
2) Incidents related to technical conditions, maintenance, and repair of aircraft operating under approvals or a certificate of authorization. Appendix C of GACAR Part 4 contains samples of reportable incidents related to aircraft maintenance.
3) Incidents of facility malfunction and promulgated information incidents as appropriate to the below-mentioned service providers:
a) ATS providers certificated under GACAR Part 171;
b) Instrument flight procedures service providers certificated under GACAR Part 172;
c) Aeronautical telecommunication service providers operating under the provisions of GACAR Part 173;
d) AIS providers certificated under GACAR Part 175; and
e) Meteorological service providers operating under the provisions of GACAR Part 179.
4) Appendix D of GACAR Part 4 contains samples of reportable incidents related to air navigation services.
B. Aerodrome operators certificated or authorized under GACAR Part 139, Part 138, and Part 137, and ground service providers approved under GACAR Part 151, must notify, report, and investigate all incidents.
1) The incident notification must include at least the following information:
a) Name of the organization and contact details.
b) Date, time, and place of the incident.
c) Brief description of events, identifying the applicable aircraft, aerodrome, and publications such as maps, charts, and procedures.
2) Appendix E of GACAR Part 4 contains the list of reportable incidents related to aerodrome operation, and Appendix F of GACAR Part 4 lists reportable incidents related to ground services. The aerodrome and ground service provider must use the incident reporting forms given in Appendices 2D and 2E, respectively.
2.8.5.5 Incident Investigation by Board
1) All scheduled commercial air operators, in accordance with GACAR §4.17(b), must establish a Permanent Investigation Board (PIB) to investigate serious incidents. The Board should consist of the Chief of Flight Safety, a senior pilot, and the maintenance manager.
2) As per GACAR §4.17(d), serious incidents related to ATS, particularly AIRPROX, must be investigated by a board constituted of expert members in the field, headed by the Manager/Chief or Deputy Chief of the ATS unit.
3) In accordance with GACAR §4.17(e), aerodrome-related serious incidents, especially runway incursions/excursions, must be investigated by a board constituted of expert members in the field, headed by the person in charge of the aerodrome.
4) Ground services related accidents, especially those involving vehicle/aircraft collisions or injury or death of person(s), must be investigated by a board constituted of expert members in the field from the ground services provider, the aerodrome, and the aircraft operator, as appropriate.
2.8.5.7 Timeline for the completion of investigation
1) The actual time required for an investigation will depend on the complexity of each case. However, to ensure completion of the investigation at the earliest, the following time limits for investigation of the various types of incidents will be adhered to:
a) An engineering incident involving maintenance procedures – 02 weeks
b) An operational incident involving procedures by the operating crew – 02 weeks. If an operational incident is accompanied by the failure of an aircraft system/component, an incident report should be submitted well within the timeline to facilitate immediate corrective actions. All efforts should be made to submit the final investigation within four weeks.
c) Investigation of an incident which involves failure of components or systems, and where the component investigation report is essential to determine the cause of the incident, should be completed within 3 months or as much time as is taken by the equipment manufacturer, overhaul/maintenance shop, or laboratory.
d) An incident involving consultation of external experts or manufacturer participation – 03 months
e) Ground incidents involving a collision between aircraft and vehicle, or aircraft and aircraft, should be investigated within 02 weeks
f) Investigation of runway incursions – 02 weeks
g) Investigation of runway excursions – 30 days
h) Investigation of ATC incidents – 30 days
i) Any incidents related to ground services, dangerous goods, and aerodromes other than vehicle/aircraft collisions and runway incursions – 30 days
The final investigation/root cause report, including recommendations and remedial actions, should be completed as per the timeline set in 2.8.5.7. If the investigation is not completed within the stipulated timeline, the operator may request the President for an extension of the investigation period of up to 3 months, indicating the circumstances for the extension.
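As a worked example, the time limits above can drive a simple completion-deadline calculation. The sketch below is illustrative only: the category keys are assumed shorthand, and the conversion of "03 months" to 90 days is an assumption made for the calculation.

```python
from datetime import date, timedelta

# Investigation time limits from items a)-i) above, expressed in days
# (02 weeks = 14 days; 03 months taken as 90 days for illustration).
TIME_LIMIT_DAYS = {
    "engineering (maintenance procedures)": 14,             # a)
    "operational (crew procedures)": 14,                    # b)
    "component/system failure analysis": 90,                # c) or as long as the shop takes
    "external experts / manufacturer": 90,                  # d)
    "aircraft-vehicle or aircraft-aircraft collision": 14,  # e)
    "runway incursion": 14,                                 # f)
    "runway excursion": 30,                                 # g)
    "ATC incident": 30,                                     # h)
    "other ground services / dangerous goods / aerodrome": 30,  # i)
}

def investigation_deadline(occurrence_date: date, category: str) -> date:
    """Latest completion date for an investigation of the given category."""
    return occurrence_date + timedelta(days=TIME_LIMIT_DAYS[category])

# A runway excursion on 1 January must be investigated within 30 days.
assert investigation_deadline(date(2024, 1, 1), "runway excursion") == date(2024, 1, 31)
```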
Section 6. Reporting and Investigation of Aircraft Operational Incidents
2.8.6.1 Initial Notification of Incidents. The operator must notify all significant incidents to the President as soon as practicable.
2.8.6.3 All such reportable events are to be reported to the President within 96 hours of occurrence, as stipulated in GACAR §4.13. The operator must submit the investigation report and root cause analysis to GACA/AIB within the timelines mentioned in 2.8.5.7 for the specific incident. Sample reportable occurrences are listed in the Appendices of GACAR Part 4.
2.8.6.5 Other occurrences (level 3), such as observed safety hazards, system malfunctions without immediate safety issues (covered under the MEL), and safety procedural deviations identified through the internal reporting system, the internal audit system, GACA surveillance, and the FDAP, are to be reported to the President through the periodical operational statistics and safety reporting system.
2.8.6.7 Any person or approval holder’s representative involved in an incident must provide the President with details concerning the incidents and corrective actions. The remedial measures include restoring the facility, rectifying defects, revising procedures, or training of personnel.
2.8.6.9 Incident reporting formats for flight operations, airworthiness, ANS, aerodromes, and ground services are given in Appendices 2A through 2E. Specific formats for periodic statistics and safety performance reporting are given in Appendices 3 through 6.
2.8.6.11 Investigation of Incidents. A. Unless otherwise directed by the AIB or GACA, each aircraft operator is responsible for investigating the serious incidents to identify root causes and prevent a recurrence.
1) At its discretion, depending upon the nature of the incident, GACA may investigate the incident, or a GACA representative may be involved in the operator's incident investigation. Uncovering system deficiencies, identifying the contributory factors, and preventing the recurrence of such incidents are the primary objectives of incident investigation.
2) All scheduled commercial air operators must establish a Permanent Investigation Board (PIB) to investigate occurrences and submit a report to the President within the timeline mentioned in para 8.2.5.6.
3) The Permanent Investigation Board (PIB) shall consist of the Chief or Deputy Chief of Flight Safety (or the pilot assigned flight safety duties if the fleet size is less than three aircraft), a senior pilot on the aircraft type, preferably an instructor or examiner, and a Quality Manager or Engineer qualified on the aircraft type. The board shall determine the frequency of its meetings based on the fleet size and the monthly average number of incidents.
4) The board must discuss all the occurrences of the intervening period and plan further actions. If the occurrence is of such a nature that further investigation serves no purpose, a summary investigation report may be submitted. The flight crews and maintenance crews of the incident aircraft may be interviewed if required and their accounts of the incident recorded. FDR/CVR readouts of the relevant parameters, the site report, and test reports of the relevant systems must, to the extent possible, be made available. The PIB report must be prepared as per the format given in Appendix 8.
5) The board may seek from GACA an extension of the investigation period if it cannot complete the investigation within the timeline specified in para 8.2.5.5. Scheduled commercial air operators must state in the monthly performance reports whether the investigation is closed or open. Certificate holders must ensure speedy implementation of the safety recommendations made by the Permanent Investigation Board.
6) Unscheduled commercial air operators, operators under GACAR Part 125, and pilot training schools must have provisions to investigate incidents.

7) On completing the investigation, aircraft operators and service providers must submit an investigation report to the President within the timeline specified in para 8.2.5.5. Upon completing the investigation of an incident, operators may propose specific actions to the President to prevent the recurrence of similar incidents in the industry.
Section 7. Reporting and Investigation of ANS Incidents
2.8.7.1 A. General. Unless otherwise advised by the KSA AIB or GACA, all ANS-related serious incidents must be investigated by the ANS service providers and the investigation report submitted to the President. The reportable incidents are listed in Appendix A to GACAR Part 171 and Appendix D of GACAR Part 4. ATS providers must describe procedures in the ATS Procedure Manual to notify, report, and investigate ATS-related air traffic occurrences. The ATS provider must develop procedures for internal reporting of incidents to the manager or responsible person of the unit concerned and for further notification and reporting to the AIB/GACA. Incident notification, reporting, and investigation are explicitly required for incidents related to the provision of air traffic services involving aircraft proximity (AIRPROX) or other serious difficulty resulting in a hazard to aircraft caused by, among other things, faulty procedures, non-compliance with procedures, or failure of ground facilities.
B. The ATS provider must establish detailed procedures for reporting aircraft proximity incidents and for their investigation to identify root causes and devise remedial actions. The degree of risk involved in an aircraft proximity event must be determined during the incident investigation and classified as "risk of collision", "safety not assured", "no risk of collision", or "risk not determined", or by a similar categorization of risks.
C. The Air Traffic Service Provider is responsible for the timely investigation of incidents and identifying root causes for improving safety and preventing such a recurrence. However, the AIB or GACA may investigate the incident based on the circumstances, nature, and severity of the incident.
D. The ATS Provider must provide notification of incidents as soon as practicable and submit a report in the prescribed format given in Appendix 2C within 96 hours of the incident. The standardized format provides investigators and authorities with complete information to facilitate investigation, analysis, and data storage in the GACA database for trend analysis.
The final investigation report must be submitted to the President within the timelines mentioned in para 8.2.5.5. The ATS provider may, if appropriate, give a copy of the investigation report, including recommendations and the proposed remedial actions to be taken, to the pilot or aircraft operator concerned. Appendix C of this chapter provides a format for incident reporting by pilots and ATC personnel.
1) A pilot must file a report on an air traffic incident after arrival, or confirm that the incident was already reported through a radio message. Note: The incident reporting form, if available on board the aircraft, may also be used for reporting the incident.
2) A responsible person in the ATS unit must complete the form upon receiving the incident data via radio, telephone, or teleprinter.

2.8.7.3 Reporting by Pilots. A. The pilot involved in an incident must, in accordance with GACAR §91.109 and §171.596, proceed as follows:
1) During the flight, use the appropriate air-ground frequency for reporting an incident of significance, particularly if it involves other aircraft. This permits the facts to be ascertained immediately; and 2) Submit a completed air traffic incident reporting form as soon as practicable after landing for:
a) confirming a report of an incident made initially by radio; b) making an initial report on such an incident if it was not possible to report it by radio; or c) reporting an incident at the request of an ATC unit that was not notified initially.
B. The initial report made using radio should contain at least the following information: 1) Aircraft identification 2) Type of incident (AIRPROX, PROCEDURE, FACILITY) 3) Date, time, and position of the incident (UTC) 4) Heading, route, true airspeed; altimeter setting, climbing, descending, or level flight 5) Avoiding action taken if any 6) The other aircraft type and call sign or, if not known, description 7) The other aircraft climbing, descending, or level flight 8) Avoiding action taken by the other aircraft 9) Distance to other aircraft 10) Aerodrome of first landing and aerodrome of destination.
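The minimum radio-report content listed above can be captured as a structured record for entry into a reporting system. The sketch below is illustrative only: the class and field names are assumptions, not a prescribed GACA schema (the prescribed forms are given in the Appendices).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InitialRadioReport:
    """Minimum content of an initial air traffic incident report made by radio.
    Field names are illustrative, not a prescribed schema."""
    aircraft_identification: str
    incident_type: str                        # "AIRPROX", "PROCEDURE", or "FACILITY"
    date_time_position_utc: str               # date, time, and position of the incident
    heading_route_tas: str                    # heading, route, true airspeed
    altimeter_setting: str
    flight_phase: str                         # climbing, descending, or level flight
    avoiding_action: Optional[str] = None
    other_aircraft: Optional[str] = None      # type and call sign, or a description
    other_aircraft_phase: Optional[str] = None
    distance_to_other_aircraft: Optional[str] = None
    first_landing_aerodrome: Optional[str] = None
    destination_aerodrome: Optional[str] = None

    def missing_airprox_details(self) -> list:
        """For an AIRPROX report, list the other-aircraft fields not yet supplied."""
        if self.incident_type != "AIRPROX":
            return []
        wanted = ("other_aircraft", "other_aircraft_phase",
                  "distance_to_other_aircraft")
        return [name for name in wanted if getattr(self, name) is None]
```

A reporting office could use `missing_airprox_details()` as a completeness check before the written form supplements the radio report.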
C. The air traffic incident initially reported by radio should be submitted by the pilot to the ATS reporting office of the aerodrome of the first landing. The pilot should complete relevant sections of the form supplementing the details of the radio report as necessary.
Note: The report may be submitted to any other ATS unit if there is no ATS reporting office in the area.

2.8.7.5 Reporting by ATS. A. Following an air traffic incident, as per the requirement given in §171.596, the ATC unit involved must proceed as follows:
1) Identify and designate the incident as per the following guidance:

Type of air traffic incident | Designation of incident
Aircraft proximity | AIRPROX
Serious difficulty caused by faulty procedures or lack of compliance with applicable procedures | Procedural
Serious difficulty caused by failure of ground facilities | Facilities
Operational error | OE
Operational deviation (ATC did not ensure separation) | OD

2) Operational deviation (OD). An ATS incident in which ATC did not ensure separation, resulting in one of the following:
(a) Less than the applicable separation minimum was maintained between an aircraft and adjacent airspace without prior approval; (b) An aircraft penetrated airspace under the responsibility of another controller within the same ATS unit or an adjacent ATS unit without prior coordination and approval; or (c) An aircraft, vehicle, equipment, or personnel encroached upon a landing area under the responsibility of another controller without prior coordination and approval.

3) Operational error (OE). An ATS incident in which ATC did not ensure separation, resulting in one of the following:
a) The applicable separation minimum was not maintained between two or more aircraft. b) The applicable separation minimum was not maintained between an aircraft and terrain or obstacles; or c) An aircraft landed or departed on a runway closed to aircraft operations after receiving an air traffic control clearance.
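The OD/OE designation rules above amount to a lookup over the enumerated cases. The sketch below encodes them; the condition identifiers are hypothetical names for the enumerated cases, not official codes.

```python
# Illustrative encoding of the designation guidance above. Condition identifiers
# are hypothetical names for the enumerated cases, not official codes.

OPERATIONAL_DEVIATION = {                       # OD: ATC did not ensure separation and...
    "below_minima_to_adjacent_airspace",        # (a) less than applicable minima to adjacent airspace
    "uncoordinated_airspace_penetration",       # (b) penetrated another controller's airspace
    "uncoordinated_landing_area_encroachment",  # (c) encroached on another controller's landing area
}

OPERATIONAL_ERROR = {                           # OE: ATC did not ensure separation and...
    "below_minima_between_aircraft",            # a) minima not maintained between aircraft
    "below_minima_to_terrain_or_obstacle",      # b) minima not maintained to terrain/obstacles
    "cleared_onto_closed_runway",               # c) landed/departed on a closed runway after clearance
}

def designate(condition: str) -> str:
    """Return 'OD' or 'OE' for a recognized separation-related condition."""
    if condition in OPERATIONAL_DEVIATION:
        return "OD"
    if condition in OPERATIONAL_ERROR:
        return "OE"
    raise ValueError(f"unrecognized condition: {condition!r}")
```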
4) The ATS unit should arrange with the aircraft operator to obtain the pilot's report on the incident soon after landing if the aircraft is bound for a destination within the ATS unit's area of responsibility within the KSA.
5) If the aircraft is bound for an international destination, the ATS unit at the incident area must inform, using the Aeronautical Fixed Telecommunication Network (AFTN), the ATS authority at the destination aerodrome that it should obtain the pilot's report of the incident soon after landing.
6) If the incident involves a foreign-registered aircraft, GACA will notify the civil aviation authorities of the State of Registry and the State of the Operator, giving all details of the incident using AFTN.
7) The pilot or ATS must complete the air traffic incident form and ensure that the AIB/GACA are notified with all relevant details.

2.8.7.7 Investigation and Documentation. A. Immediately following an air traffic incident, all documents and recordings relating to the incident must be impounded. The officers in charge of the ATS unit, air traffic controllers, and supervisors must preserve all documents and records related to the incident. They must collect as many details as possible at the time, while the events are still fresh in their minds. The root cause of the air traffic incident must be identified at the earliest possible time so that remedial action can be taken and a recurrence prevented.
B. The ATS unit that receives the incident report or observes the incident is usually responsible for investigating it. The ATS unit must obtain the following information: 1) Statements by the personnel involved 2) Transcripts of relevant recordings of radio and telephone communications 3) Copies of flight progress strips and other relevant data, including recorded radar data if available 4) Copies of the meteorological reports and forecasts relevant to the time of the incident 5) Technical statements concerning the operating status of the equipment; and 6) Unit findings and recommendations for corrective actions, if appropriate.

2.8.7.9 ATS Incident Investigation Process. A. Unless otherwise directed by the AIB or GACA, all AIRPROX incidents must be investigated by an Air Traffic Investigating Board (ATIB) constituted by the ATS operator. If Air Force pilots or Air Force ATCOs are involved in an incident, the matter is referred to GACA, which may coordinate the participation of an Air Force representative in the investigating board. The ATS investigation procedures must be described in the ATS procedure manual, and incidents are investigated by a board consisting of the Manager/Chief/Deputy Chief of the ATSU or ATS QA and a member from CNS acceptable to the President. The officer in charge of the ATS unit, a senior ATS officer, or the ATS QA officer/specialist may act as Board Chairman. ATS experts and specialist officers from flight operations, flight calibration, telecommunications engineering, or other fields may also be considered as board members of the incident investigation. In addition, the controller involved in the incident may be allowed to nominate an experienced controller as a member on his behalf. When two ATS units are involved, the unit where the incident occurred should convene the investigation board and invite the other unit to participate.
B. If the pilot or the aircraft operator refuses to provide the information necessary for the proper investigation of an air traffic incident, the investigator should inform the appropriate ATS authority. If a foreign civil aviation authority refuses to provide the information necessary regarding the ATS incident, GACA may report such refusal to the appropriate ICAO Regional Office for resolution. Notwithstanding, in both cases above, the ATS unit must proceed with the investigation using the available information.
C. The proceedings of an Air Traffic Investigating Board, and the documents and records used by it, should be treated as confidential material. The ATS units must prepare the specific initial information report required by the investigators and include, as appropriate:
1) Reports from controllers involved, as prepared before leaving the unit on the day of the occurrence; 2) Reports from pilots involved, if necessary, through the aircraft operator's office; 3) Relevant voice recordings, flight progress strips, and other flight data; and 4) Recorded radar data, if available.
5) The Investigating Board must review all evidence, including transcripts, FDR readouts (whenever required), and statements of all concerned. If required, the Investigating Board will seek clarification from ATCOs, CNS/airport personnel, pilots, or any other person concerned.
D. The report of the ATS Investigating Board must include a summary of the incident and its cause. The report must contain all relevant information in chronological sequence where appropriate and conclude with a list of findings, conclusions, root causes, and safety recommendations to prevent a recurrence of such an accident/incident. The report should include recommendations or corrective actions. Because the fundamental objective of the investigation is to prevent accidents, not to apportion blame or liability, the board may not make recommendations on personnel or disciplinary action in the event of controller error. For any intentional violation or willful negligence by an individual, the procedures given in the eBook on enforcement may be followed.
E. The following information must be submitted as an appendix to the investigation report: 1) Statements by the personnel involved 2) Transcripts of relevant recordings of air-ground and ground-ground communications 3) Copies of meteorological reports or forecasts pertinent to the incident 4) Copies of flight progress strips and other data relevant to the incident, including recorded radar data, if available; and 5) Any technical statements concerning the operating status of equipment.

F. On completing the investigation, the ATIB must send the investigation report to GACA and to the operator, if appropriate. A copy of the investigation report may also be sent to the CAA of the foreign operator, if applicable. The ATS Investigation Board is to complete the investigation promptly, if possible within the time stipulated in para 8.2.5.5 and in any case within 90 days of occurrence. The ATS provider may seek from the President an extension of the investigation period if the investigation cannot be completed within the stipulated timeline. The ATS provider must submit an investigation progress report to GACA during the monthly statistics and performance reporting.
Section 8. Reporting and Investigation of Aerodrome Incidents
2.8.8.1 General. Aerodrome operators must submit initial notifications of all incidents that significantly affect aircraft safety, including any major failure, malfunction, or defect of aerodrome equipment or systems that has or could have endangered the aircraft or its occupants. Examples of reportable occurrences include, but are not limited to, the following:
A. Emergency declaration or runway blockage B. Improperly marked or closed runway, or construction interference with a runway C. Major failures of the aerodrome lighting, marking, or signage systems D. Failure of the aerodrome emergency alerting system E. Taxiway incursion or maneuvering area excursion F. Airport fire/explosion/fumes G. Encounters with wildlife on a runway in use or on any other movement area of the aerodrome

2.8.8.3 Initial notification, reporting, and investigation: Initial notification of significant incidents is required as per the regulations laid down in GACAR Parts 137, 138, 139, and 4.
Unless otherwise directed by the AIB/GACA, the aerodrome operator is responsible for the timely investigation of incidents and the identification of root causes so that remedial actions can be taken to prevent a recurrence. At its discretion, depending upon the nature of the incident, the AIB/GACA may investigate the incident, or a GACA representative may involve the operator in the incident investigation.
2.8.8.5 Internal Mechanism of reporting occurrences: The aerodrome operator may have different methods of reporting incidents internally. Reporting methods include telephone or radio reporting for anyone involved in, or witness to, an accident or incident. The aerodrome operator must describe clear procedures in the aerodrome manual/customized procedures manual for collecting information from the aerodrome staff, notifying and reporting to the AIB/GACA, and investigating incidents. The aerodrome service provider may provide a single, central, easily remembered telephone number. If feasible, online reporting (e-mail notification) provisions may be given for speedy processing of information and investigation. Aerodrome operators must have clear procedures for reporting accidents and incidents promptly. A non-punitive reporting system must be encouraged.
2.8.8.7 Initial Notification: Each aerodrome operator must notify the President of significant incidents as soon as practicable. The initial notification must provide the following details: A. Name of the aerodrome operator B. Nature/type of incident C. Details of the aerodrome part/area of occurrence D. Details of the affected operator or aircraft, if applicable E. Any emergency action taken

2.8.8.9 Reporting of Occurrence: Notwithstanding the notification of an incident required under GACAR Part 4, the person involved must provide the President with details concerning the aerodrome incident within 96 hours of the incident. Incident reports must be submitted using the form given in Appendix 2D. Note: To avoid duplication of notification and reporting, aerodrome operators can report incidents using the prescribed format within the reporting period.
2.8.8.11 Occurrence investigation: Aerodrome operators must investigate all serious and significant incidents and submit the investigation report to the President. At its discretion, depending upon the nature of the incident, GACA may investigate the incident, or a GACA representative may involve the aerodrome operator in the incident investigation. Occurrences related to runway excursions and runway incursions must be investigated by a board consisting of senior officers with expertise in aerodrome operations. There are four categories of runway incursions:
1) Category A is a serious incident in which a collision was narrowly avoided. 2) Category B is an incident in which separation decreases and there is a significant potential for collision, which may result in a time-critical corrective/evasive response to avoid a collision.
3) Category C is an incident characterized by ample time or distance to avoid a collision. 4) Category D is an incident that meets the definition of runway incursion, such as the incorrect presence of a single vehicle/person/aircraft on the protected area of a surface designated for the landing and take-off of aircraft but with no immediate safety consequences.
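For use in an occurrence database, the four categories above can be encoded as a severity scale. The sketch below is illustrative, not prescribed tooling: the descriptions are paraphrased, and the helper reflects this section's requirement that Category A and B incursions be investigated by an operator-constituted board.

```python
# Paraphrased encoding of the four runway incursion severity categories
# (A most severe). An illustrative sketch for occurrence-database use.
RUNWAY_INCURSION_CATEGORIES = {
    "A": "Serious incident in which a collision was narrowly avoided",
    "B": "Decreasing separation with significant collision potential; "
         "a time-critical corrective/evasive response may be required",
    "C": "Ample time or distance available to avoid a collision",
    "D": "Incorrect presence on the protected area, no immediate safety consequence",
}

def requires_operator_board(category: str) -> bool:
    """Category A and B incursions must be investigated by a board constituted
    by the aerodrome operator and acceptable to the President."""
    category = category.upper()
    if category not in RUNWAY_INCURSION_CATEGORIES:
        raise ValueError(f"unknown runway incursion category: {category!r}")
    return category in ("A", "B")
```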
Runway incursion incidents of Category A and B must be investigated by a board constituted by the aerodrome operator and acceptable to the President.

2.8.8.13 The investigation must be completed within the period mentioned in para 2.8.5.7. If the investigation is incomplete, the aerodrome operator may seek from the President an extension of the investigation period. In such cases, the aerodrome operator must submit an incident non-closure report to the President, indicating the reasons for not completing the investigation, in the monthly statistics and safety performance reporting.
2.8.8.15 Aerodrome incidents should be investigated to correctly identify the root cause(s), not to blame an individual. A detailed investigation is essential for finding solutions to prevent future accidents and incidents. Often an occurrence is caused by multiple factors acting at the same time. The investigators must identify all the interlinked hazards and latent issues that resulted in the incident. These factors can include, for example:
A. Misunderstood communication B. Inadequate signage, markings, or lights C. Inadequate training of those involved D. Trained staff not acting in the way they were trained E. Too-infrequent refresher training F. Inadequate equipment/mechanical condition/mechanical failure G. Tasks carried out too quickly with inadequate resources H. Failure to use PPE I. Inadequate risk assessment J. Human and organizational factors K. Non-adherence to Standard Operating Procedures (SOPs) L. Inadequate response to changing circumstances

2.8.8.17 Aerodrome operators must develop a safety occurrence database, an essential resource for understanding the type and nature of occurrences. The primary use of the aerodrome safety database is to create situational awareness and prevent occurrences. Understanding past events enables steps to be taken to prevent the recurrence of similar events in the future.
2.8.8.19 The safety data will reveal the magnitude of a specific problem. The safety database reveals the overall costs and consequences of occurrences, provides safety trends to direct future preventive actions, and pinpoints high-risk tasks. When a specific trend becomes apparent for any cause, it points to the activities that need review. The analysis can be presented in several ways:
High-risk area (including "hot spot") maps of accident, incident, and occurrence locations. Aerodrome operators are required to analyze risk using trend charts, graphs, and mapping as given below: (A) Graphs of numbers of accidents, incidents, or occurrences per month (B) Graphs of each type of accident, incident, or occurrence per month (C) Graphs of accidents, incidents, and occurrences factored per 1,000 or 10,000 aircraft movements

2.8.8.21 The aerodrome operator must develop procedures for follow-up action to ensure the timely completion of rectification actions and to act upon the proposed recommendations. The aerodrome operator must have a procedurally driven investigation system. The aerodrome manual/customized procedures manual must contain a detailed procedure for notifying, investigating, and reporting occurrences, including smooth coordination with affected operators. Emergency handling procedures and processes must also be included in the aerodrome manual/customized procedures manual to deal with the aftermath of accidents and incidents. This readiness enables the gathering of pertinent details for subsequent investigation. The aerodrome incident reporting system is to support the following:
A. Ensure emergency services attendance B. Establish safe temporary closures of the affected area C. Clean up and return to service D. Communicate with other airport users

2.8.8.23 Investigation of Runway Incursions. Various strategies are available to prevent runway incursions. The methods include ATS clearance and read-back protocols for taxiing onto or entering an active runway; airport signs and lights; obligations on ATS personnel and aircrew to visually scan a runway before entering; and direct intervention by ATS personnel, aircrew, or vehicle operators to avert an impending collision. Despite all of these, a runway incursion occurs when any of these defenses is not adhered to.
2.8.8.25 The severity of a runway incursion depends on the circumstances surrounding the event and may be related to the various barriers whose failure allowed the runway incursion. Severity and frequency are the two components necessary to calculate risk. Whereas the frequency of occurrence can be derived directly from the occurrence database, the severity of the incident must be assessed from the circumstances surrounding the event. Runway incursions must be notified to GACA as per the format prescribed in this eBook volume.
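Since frequency comes from the occurrence database and severity from the assessed category, a simple risk index can combine the two. The sketch below also shows the rate normalization per 10,000 movements mentioned in 2.8.8.19; the severity ranks, frequency bands, and multiplicative index are illustrative assumptions, not a prescribed GACA methodology.

```python
# Illustrative risk computation: a severity rank (from the assessed runway
# incursion category, A most severe) combined with a frequency band derived
# from the occurrence database. All cut-offs here are assumptions.

SEVERITY_RANK = {"A": 4, "B": 3, "C": 2, "D": 1}

def rate_per_10k_movements(occurrences: int, movements: int) -> float:
    """Normalize a raw occurrence count per 10,000 aircraft movements."""
    return occurrences / movements * 10_000

def frequency_band(rate_per_10k: float) -> int:
    """Map a normalized rate to a coarse 1-3 frequency band (illustrative cut-offs)."""
    if rate_per_10k >= 5.0:
        return 3
    if rate_per_10k >= 1.0:
        return 2
    return 1

def risk_index(category: str, occurrences: int, movements: int) -> int:
    """Higher index means the associated activity needs more urgent review."""
    rate = rate_per_10k_movements(occurrences, movements)
    return SEVERITY_RANK[category.upper()] * frequency_band(rate)

# Hypothetical example: 6 Category B incursions in 10,000 movements gives
# risk_index("B", 6, 10_000) == 3 (severity) * 3 (band) == 9
```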
Section 9. Reporting and Investigation of Ground Services Incidents
2.8.9.1 General: Ground Service Providers must submit initial notifications for all safety-significant occurrences, including major failures, malfunctions, or defects of ground services equipment or systems that have or could have endangered the aircraft or its occupants. If the aerodrome does not have a Ground Services Provider, the aerodrome service provider or the concerned air operator must notify, report, and investigate all incidents that fall within the scope of the aerodrome or air operator.
Examples of reportable occurrences include, but are not limited to, the following: A. A collision on the ground between an aircraft and another aircraft, an obstacle, or a vehicle B. Ground handling equipment or any vehicle causing non-fatal injury to any person C. Vehicles on the taxiway/runway while an aircraft is taxiing D. Pushback, power-back, or taxi interference by a vehicle, equipment, or person E. Foreign objects on the aerodrome movement area that have or could have endangered the aircraft, its occupants, or any other person F. Fire, smoke, or explosions in aerodrome facilities, their vicinities, or equipment that have or could have endangered the aircraft, its occupants, or any other person G. Aerodrome security-related occurrences (for example, unlawful entry, sabotage, bomb threat) H. Removal of boarding equipment while boarding continues, leading to the endangerment of aircraft occupants I. Transporting, attempting to transport, or handling dangerous goods in a way that resulted, or could have resulted, in the safety of the operation being endangered or in an unsafe condition (for example, an incident or accident involving dangerous goods as defined in the ICAO Technical Instructions for the Safe Transport of Dangerous Goods by Air, ICAO Doc 9284)
J. Significant spillage during fueling operations

All accidents and serious incidents are to be notified to the President as soon as practicable and reported within 48 hours in accordance with GACAR Part 4. Other incidents must be notified as soon as practicable and the details reported within 96 hours of the incident.
2.8.9.3 Initial notification, reporting, and investigation: Initial notification information for significant incidents related to ground services should be submitted in accordance with the procedures laid down in GACAR Parts 151 and 4. The ground service provider is responsible for the timely investigation of incidents, identifying the root causes, taking remedial actions, and preventing a recurrence. At its discretion, depending upon the nature of the incident, the AIB/GACA may investigate the incident, or a GACA representative may involve the operator in the incident investigation.
2.8.9.5 Internal Mechanism of reporting occurrences: The Ground Service Provider may have different methods of reporting incidents internally. Reporting methods include telephone or radio reporting for anyone involved in, or witness to, an accident or incident. The Ground Service Provider may provide a single, central, easily remembered emergency telephone number. At some airports, provisions for online reporting (e-mail notification) may be provided for speedy handling of occurrences. Ground Service Providers must provide clear procedures for all aerodrome staff to report accidents and incidents promptly. Non-punitive reporting must be encouraged.
2.8.9.7 Initial Notification: Each Ground Service Provider must notify the AIB/President, the person in charge of the aerodrome, and the concerned operator, if applicable, as soon as practicable. The initial notification to the President must provide details concerning:
A. An aircraft or its occupants B. Ground support equipment C. Ground services personnel D. Crew members E. Passengers F. Any other person

The initial notification must be submitted to the President as soon as practicable, and the detailed report must be submitted within 96 hours of the incident.
2.8.9.9 Reporting of Occurrence: Notwithstanding the initial notification of an incident required under GACAR Part 4, the persons involved in any incident must provide the President with details of the ground services related incident within 96 hours of the incident. Incidents must be reported to the AIB/GACA using the format given in Appendix 2E.
2.8.9.11 Occurrence investigation: The Ground Service Provider must investigate all significant incidents and submit the investigation report to the President. At its discretion, depending upon the nature of the incident, GACA may investigate the incident, or a GACA representative may involve the Ground Service Provider in the incident investigation.
2.8.9.13 The investigation must be completed within the time stipulated in para 2.8.5.7. If the investigation cannot be closed in time, the Ground Service Provider may apply to the President for an extension of the investigation period. In such cases, the service provider must report to the President on the status of the incomplete investigation, with the underlying reasons, in the monthly statistics and safety performance reporting.
2.8.9.15 Ground service providers, including the various agencies performing their services and other organizations as defined in GACAR Part 151, are required to report to the President all occurrences as per the requirements stipulated in GACAR Part 4 and to identify hazards in the ground handling operation in accordance with the requirements described in GACAR Part 5. However, since the owner of the equipment may be a different organization or person from the Ground Service Provider, any safety risk associated with hazardous equipment must be identified by the Ground Service Provider, who must ensure mitigative action by the owner. Some of the equipment and organizations are listed below:
Organization | Equipment
Air Operator | Aircraft
Aerodrome Operator | Jetway, visual docking guidance system, marshaller
Ground Handling Organization | Aircraft stairs, conveyor belts, baggage carts, cargo loaders, cargo dollies, Ground Service Equipment (GSE), pushback truck
Vehicle/Equipment Maintenance Organization | Vehicles, maintenance stairs, maintenance dock, aircraft jacks
Fuel Provider | Fuel/hydrant trucks
Catering Company | Catering trucks
Cleaning Firms | Cleaning trucks
Toilet Service Provider | Toilet service truck
Potable Water Service Firms | Potable water service truck
De/anti-icing Agency | De/anti-icing truck/rig

2.8.9.17 It is the responsibility of all ground services personnel to ensure that all security and safety-related occurrences are reported to their supervisors as soon as possible, so that the concerned operator, flight crew, and the President are informed in accordance with the procedures described in the ground services manual.
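The ownership mapping above lends itself to a simple lookup the Ground Service Provider can use when assigning mitigation follow-up for a hazardous piece of equipment. The dictionary below is an illustrative, partial encoding of the table, not an exhaustive or official registry.

```python
# Illustrative encoding of the equipment-ownership table: given a piece of
# equipment involved in a hazard, find the organization responsible for
# mitigative action. Keys are paraphrased from the table, not an official registry.

EQUIPMENT_OWNER = {
    "aircraft": "Air Operator",
    "jetway": "Aerodrome Operator",
    "visual docking guidance system": "Aerodrome Operator",
    "aircraft stairs": "Ground Handling Organization",
    "pushback truck": "Ground Handling Organization",
    "cargo loader": "Ground Handling Organization",
    "aircraft jacks": "Vehicle/Equipment Maintenance Organization",
    "fuel/hydrant truck": "Fuel Provider",
    "catering truck": "Catering Company",
    "cleaning truck": "Cleaning Firms",
    "toilet service truck": "Toilet Service Provider",
    "potable water service truck": "Potable Water Service Firms",
    "de/anti-icing truck": "De/anti-icing Agency",
}

def owner_for(equipment: str) -> str:
    """Return the organization responsible for mitigative action, defaulting to
    the Ground Service Provider when ownership is not recorded."""
    return EQUIPMENT_OWNER.get(equipment.lower(),
                               "Ground Service Provider (verify owner)")
```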
2.8.9.19 The service provider must retain the records of accident/incident investigations for a minimum of ten years.

2.8.9.21 A Ground Services Supervisor must be assigned to oversee notification, reporting, and the organization of investigation activities. In case of an accident, the Supervisor must inform and coordinate with the relevant personnel as per the set procedures for emergency handling. In the event of an accident or incident, the following actions are to be taken immediately:
A. Do not endanger personal safety B. Prevent further risks to other personnel C. Deal with any injuries to personnel as per the procedures described D. Request appropriate assistance E. Secure the occurrence spot by preventing movement F. Collect suitable photographic evidence of the incident

2.8.9.23 Personnel of the Ground Service Provider must familiarize themselves with their duties and responsibilities as described in the ground services manual. This responsibility includes being familiar with the local safety and emergency response plan for accidents, incidents, or other emergencies that may take place during aircraft ground services operations. The occurrence reporting procedures and the emergency response procedures must be in line with GACAR Part 5 and acceptable to the President. Reportable occurrences include, but are not limited to:
A. a stowaway or forbidden items identified on airside
B. unattended or abandoned luggage/baggage located within the secure airside perimeter
C. a flight dispatched where security measures do not meet the applicable passenger or baggage security regulations
D. injuries to air terminal personnel or other personnel conducting services for the ground handling organization or (aircraft) service provider
E. undeclared dangerous goods discovered
F. damage to an aircraft
G. evacuation of a terminal building or other airside location
H. potential hazards which may cause injury to passengers or ground personnel

2.8.9.25 All significant incidents, especially ground collision incidents and vehicular incidents not involving aircraft, must be investigated by a Board consisting of representatives from the involved airline's flight safety department and the Safety Investigation Coordinator/Supervisor of the Ground Service Provider. The Ground Services Manual must describe procedures for incident notification, investigation, and reporting acceptable to the President.
2.8.9.27 The air operator will investigate incidents involving aircraft damage. The investigation may be conducted by the GACA, depending upon the injury to personnel or damage to the equipment, aircraft, or structure.
2.8.9.29 Vehicular incidents not involving aircraft damage must be investigated by a team consisting of representatives from the involved air operator's flight safety department and the Safety Investigation Coordinator/GSD department of the aerodrome operator. For private operators or Part 91 operators, GACA may, at its discretion, investigate the incident.
Incidents involving aircraft will be investigated by the respective aircraft operator. Depending upon the injury to personnel or damage to the equipment, aircraft, or structure, the investigation may be conducted by the AIB/GACA. The criteria for the involvement of GACA in an investigation may depend upon serious injury or fatality, or substantial damage to the aircraft and equipment or structure associated with the aircraft.
Section 10. Dangerous goods accident and incident reporting and investigation
2.8.10.1 A dangerous goods accident is an occurrence associated with and related to the transport of dangerous goods which results in fatal or serious injury to a person or major property damage. For this purpose, a serious injury is an injury which is sustained by a person in an accident and which:
(1) requires hospitalization for more than 48 hours, commencing within seven days from the date the injury was received; or
(2) results in a fracture of any bone (except simple fractures of fingers, toes or nose); or
(3) involves lacerations which cause severe haemorrhage, nerve, muscle or tendon damage; or
(4) involves injury to any internal organ; or
(5) involves second or third degree burns, or any burns affecting more than 5 per cent of the body surface; or
(6) involves verified exposure to infectious substances or injurious radiation.
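The six criteria above amount to a simple decision rule. As a minimal sketch, with hypothetical field names that are not taken from the manual or any official form, the serious-injury test could be expressed as:

```python
from dataclasses import dataclass

@dataclass
class InjuryAssessment:
    """Facts about an injury in a dangerous goods occurrence.
    All field names are illustrative assumptions, not manual terms."""
    hospitalization_hours: float = 0.0        # commencing within 7 days
    bone_fracture: bool = False
    simple_digit_or_nose_fracture: bool = False  # the exception in criterion (2)
    severe_laceration: bool = False           # haemorrhage / nerve / muscle / tendon
    internal_organ_injury: bool = False
    second_or_third_degree_burn: bool = False
    burn_body_surface_pct: float = 0.0
    infectious_or_radiation_exposure: bool = False

def is_serious_injury(a: InjuryAssessment) -> bool:
    """Apply the six criteria of 2.8.10.1 in order."""
    if a.hospitalization_hours > 48:
        return True
    if a.bone_fracture and not a.simple_digit_or_nose_fracture:
        return True
    if a.severe_laceration or a.internal_organ_injury:
        return True
    if a.second_or_third_degree_burn or a.burn_body_surface_pct > 5.0:
        return True
    return a.infectious_or_radiation_exposure
```

Any one criterion being met is sufficient, which is why the checks short-circuit with early returns.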
2.8.10.3 A dangerous goods incident is an occurrence, other than a dangerous goods accident, associated with and related to the transport of dangerous goods, not necessarily occurring on board an aircraft, which results in injury to a person, property damage, fire, breakage, spillage, leakage of fluid or radiation or other evidence that the integrity of the packaging has not been maintained. Any occurrence relating to the transport of dangerous goods which seriously jeopardises the aircraft or its occupants is also deemed to constitute a dangerous goods incident.
2.8.10.5 Any type of dangerous goods occurrence must be reported, irrespective of whether the dangerous goods are contained in cargo, mail or baggage. Any accident or incident involving dangerous goods must be reported using the form in Appendix 2G. The form must also be used to report any occurrences when undeclared or mis-declared dangerous goods are discovered in cargo, mail or unaccompanied baggage, or when accompanied baggage contains dangerous goods which passengers or crew are not permitted to take on aircraft.
2.8.10.7 Any accident or incident involving dangerous goods must be notified to the President as soon as practicable. The initial notification must contain brief details of the accident/incident, the operator/aircraft involved, and any death, injury, or loss of property due to the accident. An initial report in the prescribed form given in the Appendix, which may be made by any means (email, fax or hard copy), must reach the President within 48 hours of the occurrence.
2.8.10.9 Copies of all relevant documents and any photographs related to the occurrence must be attached to this report. All dangerous goods, packagings, documents, and records relating to the occurrence must be preserved and retained as evidence until the investigation is completed.
2.8.10.11 The operator of an aircraft carrying dangerous goods that is involved in a dangerous goods related accident or serious incident must constitute a committee of experts to investigate the accident/serious incident and submit the investigation report to the President within 90 days from the date of the incident.
Any dangerous goods accident or serious incident other than an onboard occurrence must be investigated by a committee of experts constituted by the concerned ground service agency, which must submit the investigation report to the President within 90 days from the date of the incident.
2.8.10.13 Ground service providers must submit an incident summary in their periodic reports.
Section 11. Safety Occurrence Reporting Governance Process
2.8.11.1. Purpose. This section describes the entire process of handling occurrence reports by the GACA Safety Risk Management (SRM) Department and the other relevant departments in a coordinated manner to ensure effective implementation of GACAR Part 4 and this chapter.
GACAR Part 4 stipulates the requirements for occurrence reporting by GACA-regulated entities, which are responsible for submitting safety occurrence reports via the GACA electronic reporting system. In cases where the system is not available, safety reports should be sent by email to (sd@gaca.gov.sa). This section governs the internal GACA processing of these safety reports, from receipt of the reports through their closure by the respective GACA department.
2.8.11.2. Stakeholders. The key stakeholders involved in the Safety Report Governance process are the Executive Vice President (EVP), Respective Departments within the Aviation Safety and Environmental Sustainability sector, the Safety Risk Management (SRM) general department, and certificate holders.
2.8.11.3. Safety Occurrence Reports Governance Process. The Safety Report Governance process follows the flow chart depicted in Figure 2.8.11.1. The process is split into three main categories: the Industry Organizations (or certificate holders), the SRM Department, and the Respective Departments (RDs).
2.8.11.3.1. Initial Report Submittal, Validation, and Assignment. According to GACAR Part 4, GACA-regulated entities are obligated to provide GACA with mandatory reports and are encouraged to provide GACA with voluntary reports. Once a certificate holder has submitted the relevant safety report, whether mandatory or voluntary, the SRM Department, through its Risk Specialists, will initiate an initial report validation. The report validity check includes ensuring that the report contains the necessary information and is assigned to the proper category. Once the validity check has been completed, the Risk Specialist will reassign the report to the Respective Department to verify, validate, and close as detailed in 2.8.11.3.2.
This step shall be completed within 3 business days from the report submittal by the regulated entity.

2.8.11.3.2. Report Analysis, Validation and Closure Estimation. Once the report has been assigned to the GACA Respective Department, the person in charge of the report within the Respective Department will conduct a thorough validation and analysis of the report.
The analysis and validation of the report must focus on the following information:
(a) The report contains all necessary data filled in as per the format.
(b) Observations/defects/incidents are written clearly to enable proper investigation.
(c) The root cause analysis (preliminary or final) conducted by the certificate holder.
(d) Any immediate containment actions taken by the certificate holder.
The validation and analysis will include a detailed review and analysis of the report and a request for any further clarification or information from the concerned certificate holder and/or reporter. The regulated entity is required to furnish details within the timeframe set by the person in charge.
The person-in-charge will request from the certificate holder a safety risk analysis of the report, including a Root Cause Analysis (RCA) and Corrective Action Plans (CAPs). The person-in-charge will agree on an appropriate timeline for the certificate holder to furnish details of the requested RCA and CAPs.
Note: The actual time required for determination of the RCA and CAPs will depend on the complexity of each case. Time limits for investigation of various types of incidents are detailed in Section 2.8.5.7. This step shall be completed within 5 business days from the report assignment by the SRM Department.
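The two time-boxed steps (3 business days for SRM validation and assignment, then 5 business days for the Respective Department's analysis) can be sketched as a deadline calculation. This is an illustrative sketch only: it assumes a Friday/Saturday weekend and hypothetical function names, neither of which is prescribed by the manual.

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Advance a date by a number of business days.
    A Friday/Saturday weekend is assumed; adjust WEEKEND for
    other conventions (Python weekday(): Mon=0 .. Sun=6)."""
    WEEKEND = {4, 5}  # Friday, Saturday
    current, remaining = start, days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() not in WEEKEND:
            remaining -= 1
    return current

def governance_deadlines(submitted: date) -> dict:
    """Deadlines for the two time-boxed steps of 2.8.11.3:
    3 business days for SRM validation/assignment, then 5 business
    days for the Respective Department's analysis and validation."""
    srm_assignment_due = add_business_days(submitted, 3)
    rd_analysis_due = add_business_days(srm_assignment_due, 5)
    return {"srm_assignment_due": srm_assignment_due,
            "rd_analysis_due": rd_analysis_due}
```

A tracking system built on such a calculation could then drive the weekly escalation report described in 2.8.11.3.3.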
2.8.11.3.3. Closure of Safety Reports. The person-in-charge shall document the agreed CAPs in the GACA electronic reporting system and carry out a regular follow-up with the regulated entities on the status of CAPs implementation based on the closure dates agreed. Once the remedial actions are completed by the regulated entity, the respective department shall verify the report and carry out, if required, a spot inspection to confirm action is completed.
If the CAPs have not been closed by the agreed due date, the SRM Department will escalate this to the EVP in the weekly escalation report. The accepted due date may be extended by the Respective Department's General Manager in writing. The new deadline shall be updated in the GACA electronic reporting system. No further extension will be given without approval from the EVP.
2.8.11.3.4. Safety Report Governance Process Assurance. The SRM Department may conduct assurance over the safety report governance process to ensure compliance with this process.
Section 1. Reporting of aircraft defects
2.9.1.1 A. Applicability: All aircraft operators approved under GACAR Parts 121, 125, 135, and 91 and maintenance organizations approved under Part 145 must have a system to ensure that flight crews or maintenance crews report all defects for timely troubleshooting and rectification action.

2.9.1.3 Initial Notification of Defects and Mechanical Interruptions. As per the requirements stipulated in §4.19, aircraft operators must notify the President of all major defects as soon as practicable. However, the contracted repair station may notify and report major defects if the operator transfers such responsibility to the maintenance organization by written agreement. The occurrence reporting procedures must be described in the operator's maintenance control manual and the repair station's maintenance procedure manual. Examples of reportable incidents are given below:
A. Failure of primary flight instruments
B. Hot start/hung start with starting EGT/TGT/ITT exceedance requiring maintenance
C. Cracks, permanent deformation, or corrosion of aircraft structure, if beyond the limits acceptable to the manufacturer
D. Aircraft component or system failure that resulted in taking emergency actions during flight (except shutting down an engine)
E. Suspected bird strike, foreign object ingestion, or icing detected from power loss
F. Landing gear extension or retraction snags; brake system failures; and tire burst
G. Aircraft structure damage that requires a major repair
H. Engines removed prematurely because of malfunction, failure, or defect
I. Cases of propeller feathering in flight
J. False fire warnings during flight
K. Engine exhaust system that causes damage during flight to the engine, adjacent structure, equipment, or components

2.9.1.5 All level 3 occurrences, such as aircraft defects covered under the MEL/CDL, mechanical interruptions, and other minor defects, require periodic reporting. The reporting period is every month for scheduled commercial air operators, quarterly for unscheduled commercial air operators, six-monthly for Part 125 operators, and annual for Part 91 operators.
However, the rectification actions related to MEL/CDL defects must be included in the periodic fleet statistics and safety performance report.
A. The operator's procedures must describe how to record and rectify defects. The procedure manual must contain a detailed description of occurrence data collection and rectification and incident investigation, with more demanding occurrence procedures applying to commercial operations. The defects observed during maintenance at the repair shop/line station are also tracked for reliability calculation. A format is given in Appendix 2B3 (non-routine card) to track defects during aircraft maintenance or component shop visits. The operator's procedure manual must contain detailed procedures for handling occurrences recorded in the technical log-book and for planning rectification actions by the operator's maintenance control center.
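The periodic reporting intervals of 2.9.1.5 reduce to a simple lookup. A minimal sketch, where the category keys are illustrative assumptions rather than terms from the regulation:

```python
# Reporting interval, in months, for level 3 occurrence reports
# per 2.9.1.5; the dictionary keys are illustrative labels only.
REPORTING_INTERVAL_MONTHS = {
    "scheduled_commercial": 1,    # monthly
    "unscheduled_commercial": 3,  # quarterly
    "part_125": 6,                # six-monthly
    "part_91": 12,                # annual
}

def reports_per_year(operator_category: str) -> int:
    """Number of periodic level 3 reports expected per year."""
    return 12 // REPORTING_INTERVAL_MONTHS[operator_category]
```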
B. Classification of Defects: The aircraft operator must have certificated mechanics review, analyze, and classify defects as major or minor. Examples of defects classified under the major category are listed in Appendix G of GACAR Part 4.
C. Where an aircraft operator has contracted the maintenance of its aircraft as per the provisions of §121.681, the operator remains responsible for maintenance quality, and it is the responsibility of the aircraft operator to report all such defects observed during the maintenance of the aircraft.
D. Reporting by Contracted AMO: However, the aircraft operator and the contracted GACA-certificated AMO may establish a system, as per §145.103, assigning clear responsibility for reporting, investigation, and forwarding of reports to the GACA. The operator must submit detailed information on major defects encountered on aircraft, together with defect statistics, to GACA in the periodic fleet statistics and performance reporting. The reporting procedures must be described in the Operator's Maintenance Manual/Maintenance Control Manual if the operator maintains the aircraft.
E. The defect reporting procedures must be described in the Maintenance Procedure Manual if the operator has contracted the maintenance activities to an approved Repair Station. The Repair Station must maintain a record of all the defects observed on the operator's aircraft and must produce all defect records for scrutiny by GACA officers when required.
F. Initial notification: All defects classified as major, requiring major repair or attracting public attention, must be notified to the President as soon as practicable by the aircraft operator, or by the Repair Station if the responsibility is transferred by agreement. The written information must contain at least the following details:
1) Name of the aircraft operator/AMO
2) Aircraft type and registration No.
3) Date and place of occurrence of the defect
4) Details of the defect(s) and the rectification action taken

G. The aircraft operator/AMO must report major defects using the format given in Appendix 2B. The major defect report must be submitted within 96 hours of the occurrence.
Section 2. Review/analysis/investigation of Aircraft Defects
2.9.2.1 All aircraft operators must develop a system for reviewing, analyzing, or investigating aircraft defects. Defects must be studied, diagnosed, and rectified by adequately experienced, certificated mechanics. In the case of defects rectified at a transit station or line station, the adequacy of the rectification work, especially for repetitive defects, must be reviewed as soon as the aircraft returns to its main base. Procedures must be in place to ensure that defects are rectified well within the time limitations stipulated in the MEL.
2.9.2.5 Scheduled Commercial Air Operators. As per the provisions of §121.1557, each scheduled commercial operator must have a system for conducting a daily review of mechanical interruptions to analyze the nature and circumstances of defects and mechanical interruptions. The review meeting must include representation from Flight Operations, Maintenance/Continuing Airworthiness, and Planning/Logistics/Stores. A representative from GACA may participate in the review meeting based on the seriousness of the defect and mechanical delays.
2.9.2.7 Operators other than Scheduled Commercial Air Operators. For aircraft operators other than scheduled commercial air operators, the periodicity of the review meeting is determined based on the complexity and fleet size of the operator. The operator shall decide the periodicity for the review meeting in consultation with the GACA.
2.9.2.9 A representative from the GACA Airworthiness Department may participate in the aircraft operator's review meeting and seek additional information related to defect rectification or mechanical interruption.
2.9.2.11 Mechanical Interruption Summary Reports (MISRs). GACAR §121.1557 requires operators to submit MISRs providing the details of flight delays in reaching the scheduled destination because of mechanical difficulties. The MISR is one of the major indicators of deficiencies in the effectiveness of the operator's maintenance program, and analysis of these reports is one of GACA's most valuable means of verifying that effectiveness. The MISRs of the preceding periods are reviewed to detect trends or continuing irregularities. Such trends and irregularities may indicate problem areas in maintenance, operational procedures, or the training system affecting the reliability of the operator's aircraft. Detailed procedures for reviewing an operator's Mechanical Interruption Summary Report for Parts 121 and 135 are described in eBook Vol 4, Chapter 5, Sec 13.
2.9.2.13 As a part of the Continuing Airworthiness Management System, an operator must nominate a senior technical person for monitoring and supervising defect investigation at the repair station. The aircraft operator and maintenance organization must describe procedures in the maintenance control manual/maintenance procedure manual to monitor defect rectification. To assist the supervisor in defect investigations in various technical areas, the aircraft operator must have an adequate number of technical persons, approved by the Maintenance Control Supervisor as per the qualification and experience norms stipulated by GACA.
2.9.2.15 Unscheduled Commercial Air Operators. The Maintenance Control Supervisor or a representative from maintenance control may perform or supervise the defect investigation and analysis of mechanical interruptions, as described in eBook Volume 4.
2.9.2.17 All defects, irrespective of severity, whether major or minor and including repetitive defects, must be accounted for when computing monthly statistics. The statistics determine the component/system reliability indices of scheduled commercial air operators. Each repetition of a snag must be counted as a separate defect in the computation of the reliability index.
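As a minimal sketch of the counting rule above (every repetition of a snag counts as a separate defect), a monthly statistic might be computed as follows. The `ata_chapter` field and the per-1,000-flight-hour normalization are illustrative assumptions, not requirements from this manual:

```python
from collections import Counter

def defect_rate_per_1000_fh(defect_log: list, flight_hours: float) -> dict:
    """Monthly per-ATA-chapter defect rate per 1,000 flight hours.
    Each logged entry counts once, so every repetition of a snag is
    treated as a separate defect, as 2.9.2.17 requires. The log entry
    shape is an assumption for illustration."""
    counts = Counter(entry["ata_chapter"] for entry in defect_log)
    return {ata: round(n * 1000.0 / flight_hours, 2) for ata, n in counts.items()}
```

A rising rate for one chapter across successive months is the kind of adverse trend the review meetings of 2.9.2.5 are meant to surface.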
Section 3. Investigation of Major Defects
2.9.3.1 Unless otherwise advised by the AIB/GACA, as per the provisions of GACAR §4.21, all major defects must be investigated by the operator and a report submitted to the President. Investigation of all defects, particularly major defects, must be completed expeditiously so that corrective action can be taken at the earliest possible time. If the investigation process is likely to take longer than one month, an investigation progress report must be submitted to the President every month until the report is finalized. Operators must use all possible resources and efforts to complete the investigation within the timeline mentioned in 2.8.5.7.
2.9.3.3 The major defect must be investigated by the aircraft operator in association with the maintenance organization or aircraft manufacturer and a representative from the GACA (Airworthiness Department).
The representative from GACA may require the operator/owner of the aircraft to submit components, worksheets, documents, and information related to the defect for investigation.

2.9.3.5 The aircraft operator/owner must submit investigation reports to the President soon after finalization. The final report must contain at least the following information:
1) Identification of the parts/systems involved
2) The apparent or actual cause of the defect
3) Life of the affected component since new and since the last inspection, in terms of flight hours/landings/cycles
4) The preventive measures, if any
5) Any disciplinary action, if taken against any personnel
6) Whether the aircraft operator considers the investigation closed or open, and, if the investigation is still in progress, the time required to complete it.
Note: The purpose of the investigation is to avoid recurrence, thus raising the standard of airworthiness and enhancing the aircraft safety level. In good spirit, the efforts of investigators should focus on determining the cause of the defect rather than identifying the person who caused it. However, if it is confirmed that the occurrence was caused by a careless and casual attitude or by willful negligence of technical personnel, the operator should take disciplinary action against the erring person. The GACA may be consulted to avoid duplicated penal action. Any legal action should be commensurate with the seriousness of the offense, keeping in view the past professional record of the offender.
Section 4. Submission of Defect Reports
2.9.4.1 A scheduled commercial air operator/AMO must submit a report every month on the investigation results of all defects, whether major or minor, considered collectively to determine weaknesses, if any, in the basic design of a component, the layout of a system, or the maintenance technique. If weaknesses are detected, the necessary corrective action must be taken by the aircraft operator/AMO under intimation to the President.
2.9.4.3 Data Analysis: As per the stipulation of GACAR §4.7, scheduled commercial air operators and repair stations must maintain defect databases and provide access to the GACA. Organizations, depending on their size and complexity, must designate one or more persons to independently handle the collection, evaluation, processing, analysis, and storage of the details of reported occurrences. The Data Analysis department of GACA may use such certificate holder databases to evaluate the safety trends of the operators and safety trends at the national level as a part of the State Safety Programme. Further requirements on maintaining the defect database and SDR database are described in eBook Volumes 4 and 2.
2.9.4.5 Service Difficulty Reports: Certificate/authorization holders under GACAR Parts 121, 135, and 125 must submit Service Difficulty Reports to GACA as per the procedures laid down in §121.1553, §135.695, and §125.539 respectively. Approved maintenance organizations must also submit SDRs for any occurrences observed, as per the procedures laid down in §145.103.
2.9.4.7 Because of its alerting nature, the initial SDR may be notified through telephone messages, e-mail, or written reports. Subsequently, a detailed SDR must be submitted using the specific format. The SDR notification must contain at least the following details:
1) Name and address of the aircraft owner
2) Whether an accident or incident
3) Related SBs, service letters, and ADs; and
4) Disposition of the defective parts

2.9.4.9 Notwithstanding the requirements stipulated in GACAR Part 4, the President may require any operator, in the interest of the safety of aircraft, to submit the following items:
1) Full details of any defect(s); or
2) Any component associated with the defect or delay investigation.
Such components shall not be disposed of in any manner without the prior approval of the concerned Airworthiness Office.
2.9.4.11 An SDR must be submitted to the TC holder for faults, malfunctions, or defects that may cause service difficulties or any adverse effects on the continuing airworthiness of the aircraft. Aircraft operators and repair stations must report to the manufacturer/type design organization of the aircraft/engine/propeller/system/components within 96 hours of the occurrence. The detailed procedures are covered in §121.1553, Service Difficulty Reports. The operator must forward a copy of the SDR to the President.
2.9.4.13 The certificate holder must submit the reports to the TC holder using the form given in Appendix 2B2, with a copy forwarded to the President. The service difficulty reports must include the following information:
A. Type and identification number of the aircraft
B. The business name of the air operator
C. The date, flight number, and stage during which the incident occurred (for example, preflight, takeoff, climb, cruise, descent, landing, and inspection)
D. The emergency procedure effected (for example, unscheduled landing and emergency descent)
E. The nature of the failure, malfunction, or defect
F. Identification of the part and system involved, including available information related to the type designation of the major component and time since overhaul
G. Apparent cause of the failure, malfunction, or defect (for example, wear, crack, design deficiency, or personnel error)
H. Whether the part was repaired, replaced, sent to the manufacturer, or other action taken
I. Whether the aircraft was grounded; and
J. Other pertinent information necessary for complete identification, determination of seriousness, or corrective action.

2.9.4.15 When the certificate holder obtains additional information, including information from the aircraft or engine manufacturer, component manufacturers, or another agency, concerning a report required by this section, it must expeditiously submit it as a supplement to the first report, referencing the date and place of submission of the first report.
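The fields A through J above can be thought of as a fixed record structure. A hypothetical sketch follows; the field names are illustrative inventions, and the official format remains the form in Appendix 2B2:

```python
from dataclasses import dataclass

@dataclass
class ServiceDifficultyReport:
    """Sketch of the SDR content listed in 2.9.4.13 (items A-J).
    Field names are illustrative assumptions, not the official form."""
    aircraft_type_and_id: str          # A
    operator_name: str                 # B
    date_flight_and_stage: str         # C  (e.g. preflight, takeoff, cruise)
    emergency_procedure: str           # D  (e.g. unscheduled landing)
    nature_of_failure: str             # E
    part_and_system: str               # F  (incl. time since overhaul)
    apparent_cause: str                # G  (wear, crack, design deficiency, ...)
    disposition_of_part: str           # H  (repaired, replaced, to manufacturer)
    aircraft_grounded: bool            # I
    other_pertinent_info: str = ""     # J  (optional free text)
```

Structuring the report this way makes the supplement rule of 2.9.4.15 straightforward: a follow-up record can reference the original by date and place of submission.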
2.9.4.17 Major Defects and Repairs of Leased Aircraft: For reporting occurrences on leased aircraft, the following significant incidents require immediate notification to the President, the State of Registry, and the manufacturer/Type Design Organization by telephone or written report:
1) Primary structure failure
2) Control system failure
3) Fire in the aircraft
4) Engine structural failure; or
5) Any other condition that is considered an imminent safety hazard
Section 5. Occurrence Investigation by GACA
2.9.5.3 The air operator must investigate all occurrences to identify root causes and to take speedy remedial action. However, depending upon the type of occurrence, GACA may investigate the occurrence. Reportable defects will normally be investigated by GACA in association with the operator, as mentioned in para 2.9.2.9, Vol. 02. The GACA Airworthiness section may ask the operator to provide components, worksheets, documents, and information connected with the defect for such investigation.
During the investigation the GACA inspector should:
a. Check relevant records.
b. Study the defect history.
c. Discuss the matter with the concerned personnel.
d. Study the applicable procedure.
e. Check for deviation from prescribed requirements/procedures.
f. Check whether the inspection frequency and inspection techniques are appropriate.
g. Analyze the failure pattern.
h. Be able to suggest an appropriate remedy for the problem.

2.9.4.27 The purpose of investigation is to avoid recurrence of defects, thus raising the standard of airworthiness and enhancing the level of safety of aircraft. The efforts of investigators should be to determine the cause of the defect rather than who caused it. However, if during the investigation it is determined that the defect was caused by a careless attitude or by willful negligence of technical personnel, then disciplinary action should be taken by the employer, or, if deemed necessary, by GACA against the erring employee(s). Before initiating penal action, the GACA Inspector should examine whether:
a. The individual had adequate training.
b. The individual had adequate expertise to do the job.
c. The environmental conditions contributed to the discrepancy.
d. The individual was under some pressure.
e. Up-to-date literature was available.
f. The working procedure being followed was proper.
g. The required tools/equipment were available.
h. The required spare parts were available.
i. There was an excessive workload on the individual.
j. The time available to perform the job was adequate.

2.9.4.29 It would be appropriate to interview the individual who committed the discrepancy, and in case any action is to be taken against the individual, he should be given a chance to file his explanation.
The penal action should be commensurate with the seriousness of the offence, keeping in view the past professional record of the offender.
Section 6. Areas of Investigations
2.9.4.31 The three areas that Airworthiness Inspectors are generally responsible for investigating are aircraft accidents, incidents, and enforcement.
A. Accidents. Airworthiness Inspectors may be required to conduct on-site accident investigations when serious injuries or fatalities have occurred. The inspector may work closely with the AIB or be solely responsible for the investigation.
B. Incidents. AWIs are responsible for the investigation of incidents, as appropriate. Some of the incidents that require investigation are as follows:
a. Foreign air carrier incidents
b. Reports of emergency evacuation
c. Incidents involving hazardous materials
d. Noise complaints
e. Damage caused by a civil aircraft

C. Enforcement. Airworthiness Inspectors are required to investigate, analyze, and report enforcement findings. In situations that involve alleged noncompliance with GACA rules and regulations, they are required to make recommendations concerning enforcement action.
2.9.4.33 It is important that airworthiness investigation reports are completed by authorized GACA Inspectors and forwarded in a timely manner, ensuring that the following elements are well covered in the report:
a. Details of Occurrence
b. Investigation/Findings
c. Root Cause Analysis (RCA)
d. Action Taken
e. Recommendations
Section 1. Flight Operation Performance Reporting
2.10.1.1 Performance Measurement in Flight Operation Systems. This chapter outlines the procedures, methods, and formats required to be used by aircraft operators for periodic safety reporting to the GACA.
The chapter also describes the procedures for gathering occurrences data using the Flight Data Analysis Programs (FDAP). The data collection procedures described in this chapter are in line with requirements stipulated in GACAR Part 5, safety management system, and GACAR Part 4, occurrence reporting system.
2.10.1.3 Reporting of flight operation statistics and safety performance data supports GACA in the analysis of safety risks and safety trends at the state level, as outlined in the State Safety Program.
The operational statistics of all operators are a powerful tool to identify state-level adverse trends. The database supports initiating policy measures to reverse the risk trends, if any. The data collected through operational statistics is also aimed towards carrying out performance-based surveillance and strengthening performance-based regulations.
2.10.1.5 Data Sources for Flight Operation Performance Analysis. Safety data sources for flight operation performance analysis include:
A. Flight Data Analysis Reports
B. GACA periodic oversight findings and remedial action reports
C. Organization internal audit reports
D. Accident and incident investigation data

2.10.1.7 Aircraft operators must collect, organize, and analyze the safety data to identify safety trends within the organization in accordance with the requirements stipulated in GACAR §4.7. The safety data analysis group in GACA consolidates all data from the operators for state-level safety trend analysis at regular intervals.
2.10.1.9 Flight Data Analysis Program (FDAP). Flight Operations Quality Assurance (FOQA) is often referred to as FDAP. The detailed procedures for FDAP approval requirements are described in the eBook Volume 2, Safety Management System. Under GACAR Part 5, implementing a Flight Data Analysis Program (FDAP) is a requirement for GACAR Part 121 air operators who operate airplanes with a maximum takeoff mass greater than 27,000 kg. An FDAP, when implemented, is designed to improve aviation safety through the proactive use of flight-recorded data analysis outcomes. Operators will use these data to identify and correct deficiencies in various areas of flight operations. FDAP data analysis output, if used meticulously, can reduce or eliminate safety risks. Through access to FDAP data, the President can identify and analyze trends and target resources at the state level to mitigate operational risks.
2.10.1.11 Flight Data Acquisition System. The scope of the FDA depends on the type of data acquisition system installed on the aircraft. Data is obtained from the aircraft digital systems by a Flight Data Acquisition Unit (FDAU) and routed to the crash-protected Digital Flight Data Recorder (DFDR) and to an easily removable recording medium, the Quick Access Recorder (QAR) or, on some aircraft, a wireless QAR.
The FDAU acquires aircraft data via a digital data bus and analog inputs and formats the data for output to the Flight Data Recorder (FDR) according to regulatory requirements. Many FDAUs can perform additional processing and distribution of data to Aircraft Condition Monitoring Systems (ACMS), Aircraft Communications Addressing and Reporting Systems (ACARS), Engine Condition Monitoring Systems (ECM), or to a Quick Access Recorder (QAR) for recording/storage of raw flight data. There are many varieties of FDAU, known by several different acronyms, but all perform the same core functions.
2.10.1.13 QAR media is replaced at the end of each day, or sometimes after several days have elapsed, depending on media capacity, data recovery strategy, and the analysis program as described in the company operations manual acceptable to the President.
2.10.1.15 FDAP Information. FDAP information can be derived through a range of detection and analysis methods, as described below: A. Exceedance or Event Detection. Exceedance or event detection uses standard FDAP algorithms that search the data for deviations from flight manual limits, standard operating procedures, and good airmanship. For example, detected events include a high takeoff rotation rate, stall warning, GPWS warning, flap limit speed exceedance, fast approach, high/low on glideslope, and hard landing. Operators must track these details to take prompt remedial action on the associated risks.
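The exceedance-detection logic described above can be sketched in a few lines of Python. This is an illustrative example only: the parameter (takeoff rotation rate), the 3.0 deg/s limit, and the sample values are hypothetical, not an approved FDAP algorithm.

```python
# Illustrative sketch of exceedance (event) detection: scan a recorded
# parameter for runs of samples above a limit and report each event.

def detect_exceedances(samples, limit, min_duration=1):
    """Return (start_index, peak_value) for each run of samples above limit."""
    events = []
    run_start = None
    peak = None
    for i, value in enumerate(samples):
        if value > limit:
            if run_start is None:
                run_start, peak = i, value
            else:
                peak = max(peak, value)
        elif run_start is not None:
            if i - run_start >= min_duration:
                events.append((run_start, peak))
            run_start, peak = None, None
    # close out a run that continues to the end of the recording
    if run_start is not None and len(samples) - run_start >= min_duration:
        events.append((run_start, peak))
    return events

# Hypothetical takeoff rotation rate (deg/s), sampled once per second,
# checked against an invented 3.0 deg/s 'high rotation rate' limit.
rotation_rate = [1.2, 2.0, 3.4, 3.8, 2.9, 1.5, 3.1]
events = detect_exceedances(rotation_rate, limit=3.0)
```

Each detected event carries the start index and peak value, which is the kind of record an analyst would then validate against the Analysis Specification.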
B. Routine Data Measurements for Trend Analysis. Flight data should be downloaded from all flights, not only those that produce events. This data collection enables monitoring of more subtle trends and tendencies before trigger levels are reached. The selection of parameters depends on a wide range of aspects of operational variability. For example, measurements include takeoff weight; flap setting; speeds and heights; temperature; rotation and takeoff speeds versus scheduled speeds; maximum pitch rate and attitude during rotation; landing gear retraction and extension speeds, heights, and times; maximum normal acceleration at touchdown; touchdown distances; and maximum braking used. For example, the analysis includes pitch rates at high versus low takeoff weights; pilot technique during good- versus bad-weather approaches; and touchdown distances on short versus long runways. Operators must perform trend analysis to reverse adverse trends before they reach an unacceptable risk level.
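A minimal sketch of how routine measurements can reveal a drift before any single flight trips a trigger level, assuming flight-by-flight values of one parameter; the touchdown accelerations below are invented for illustration.

```python
# Trailing rolling mean over successive flights: a rising mean can expose a
# trend (e.g. in maximum normal acceleration at touchdown) before any single
# flight exceeds an event threshold.

def rolling_mean(values, window):
    """Simple trailing rolling mean; returns None until the window fills."""
    out = []
    total = 0.0
    for i, v in enumerate(values):
        total += v
        if i >= window:
            total -= values[i - window]  # drop the value leaving the window
        out.append(total / window if i >= window - 1 else None)
    return out

# Hypothetical peak touchdown accelerations (g) for seven consecutive flights.
touchdown_g = [1.25, 1.30, 1.28, 1.35, 1.40, 1.42, 1.45]
trend = rolling_mean(touchdown_g, window=3)
```

Plotting or thresholding the rolling mean, rather than the raw values, is one simple way to monitor the "more subtle trends" the paragraph refers to.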
C. Incident Investigation Data. FDAP is useful for identifying occurrences and other non-compliances in flight operations. The data corroborates the flight crew report, quantifying the reported statement.
System status and performance can add further clues to cause and effect. For example, occurrences identified using FDAP analysis include vortex wake encounters; flight control problems; system failures that affect operations; emergencies such as high-speed rejected takeoffs; and TCAS- or GPWS-triggered maneuvers. The operator must use FDR analysis to corroborate the flight information reported by the crew on occurrences.
D. Continuing Airworthiness Investigation Data. The Flight Data Analysis Program is one of the tools for occurrence detection and continuing airworthiness management. The engine monitoring program compares the actual engine parameters with the manufacturer's set limits; the comparison is used to detect engine defects and predict future performance. The engine manufacturers supply the Engine Condition Monitoring (ECM) programs and feed their databases for performance limits. Operators should consider the potential benefits of including the use of this data within their continued airworthiness program. For example, continued airworthiness management programs using FDAP cover engine thrust levels, airframe drag measurement, avionic and other system performance monitoring, flying control performance, brake and landing gear usage, and prediction of fatigue damage to structures. Operators must use the data analysis output to monitor continuing airworthiness. The AIB/GACA database containing data from the various operators is consolidated for state-level safety analysis.
E. The Information Database. Information gathered should be kept either in a central database or in linked databases that allow cross-referencing of different types of safety data. The database should have provisions for the air safety and technical fault reporting systems to provide a complete view of the operation.
For example, in a linked database a hard landing should produce a crew report, an FDAP event, and an airworthiness report. The crew report provides the context, the FDAP event the quantitative description, and the airworthiness report the engineering follow-up. The operator must use the database for a speedy and accurate diagnosis of such events.
F. Performance Assessment and Follow-up. As the FDAP system is put in place to detect, validate, and distribute various safety information, all such safety-related data reaches the operational areas where the safety and continued airworthiness benefit is derived. The data must be analyzed using domain knowledge of flight operations and airworthiness. Final validation is carried out at the expert level to weed out erroneous data. For example, during a routine analysis of go-around performance, it was found that there was a delay of over 30 seconds between flap selection and raising the gear. The reason for the delay was identified during crew interviews, and crew training materials were revised to mitigate the hazard.
G. Remedial Action. Once a hazard or potential hazard is identified, the first step is to decide if the level of risk is acceptable. The risk must be reduced or eliminated if the risk level is unacceptable. The operator then needs to assess the implications of the proposed changes.
H. System Outline – Information Flow. The simplified flow diagram shown below describes the principal components of a typical FDAP system. The FDAP must include methods and procedures for capturing and analyzing the data generated by aircraft. The goal of the FDAP must be to improve aviation safety. The FDAP helps operators identify, quantify, assess, and address operational risk. Each scheduled commercial air operator must analyze flight data following procedures acceptable to the President. Operators may set up a flight data analysis facility or contract the data analysis tasks to a competent organization.
2.10.1.17 Information and Areas for Specific Types of Events. Reporting of flight operation performance must cover all aircraft in the fleet. Operators must determine a policy on the number of flights sampled for flight data analysis on a specific aircraft type. The sampled aircraft should be drawn from the main base and night-halt stations. Information collected from the FDAP includes reports containing the actual operational parameters. The FDAP measures parameters that include the number of go-around cases below 1,000 feet above airfield level (AAL), the number of hard GPWS warnings, the number of genuine stall warnings, the number of TCAS RAs, and the number of cases of landing flaps selected below 500 feet AAL. It is essential to track event locations, such as the airfield, terminal area, or enroute area, in cases of TCAS RA, hard GPWS warning, and flap selection below 500 feet above airfield level. Any abnormal increase of specific events at a particular location must be analyzed to identify any risks related to that location.
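The location tracking required above might be sketched as follows. The airport codes, counts, and the criterion for an "abnormal increase" (at least a doubling of the baseline count, with a minimum of three events) are all hypothetical assumptions, not prescribed values.

```python
# Flag locations whose event count shows an abnormal increase relative to a
# baseline period; thresholds are illustrative operator settings.
from collections import Counter

def locations_with_abnormal_increase(current, baseline, ratio=2.0, min_count=3):
    """Flag locations whose event count is >= ratio x the baseline count."""
    flagged = []
    for loc, n in current.items():
        base = baseline.get(loc, 0)
        # a new location with several events, or a large ratio jump, is flagged
        if n >= min_count and (base == 0 or n / base >= ratio):
            flagged.append(loc)
    return sorted(flagged)

# Hypothetical GPWS-warning counts per airport for two reporting periods.
baseline = Counter({"OEJN": 2, "OERK": 4})
current = Counter({"OEJN": 5, "OERK": 4, "OEMA": 3})
flagged = locations_with_abnormal_increase(current, baseline)
```

Flagged locations would then prompt the location-specific risk analysis the paragraph calls for.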
2.10.1.19 Landing Occurrences and Trigger Levels. Aircraft operators can utilize FDAP to identify landing occurrences at specific airports or even on specific runways. Location-specific analysis ensures that any adverse trend linked to a particular runway is readily identifiable. The FDAP is a tool to identify significant changes or deviations from acceptable operational limits when event trends are monitored routinely.
Establishing performance limits and tolerances supports the operator's decision-making when taking any action. For example, a threshold acceleration value is set for the hard landing; a landing exceeding the threshold acceleration necessitates maintenance action. Operators must establish trigger levels and a maximum rate acceptable to the President. Aircraft operators must initiate appropriate measures to reduce the occurrence of abnormally firm landings, often referred to as hard landings. The normal sink rate of an aircraft on landing is two to three feet per second; when a pilot lands at seven to eight feet per second, the landing may be judged harder than normal. The technical definition of a hard landing is a peak recorded vertical acceleration that exceeds 2.1 g, that is, a force more than twice the aircraft's weight. The operator must set a safe sink rate limit for landing based on the manufacturer's prescribed limits, considering both the airworthiness (structural) and operational (pilot proficiency) perspectives.
2.10.1.21 The certification criteria for the hard landing threshold are the same for all commercial aircraft. The threshold value is expressed either as a touchdown acceleration (a g value of 2.6) or as a touchdown rate of descent exceeding 600 feet per minute (fpm) at the certified maximum landing weight.
For aircraft certified to conduct precautionary or emergency landings with mass above the certified maximum landing mass, the hard landing threshold is set to 1.7g or 360 fpm in the overweight condition.
As exceedance of these values triggers a mandatory hard-landing inspection, most manufacturers publish one or more cautionary thresholds at trigger values progressively lower than the hard-landing limits. Breaching a limiting value requires a mandatory supplementary inspection commensurate with the severity of the exceedance. Operators must set trigger values so as to prevent unnecessary additional maintenance.
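Using the certification values quoted above (2.6 g or 600 fpm at or below maximum landing mass; 1.7 g or 360 fpm when overweight), a hard-landing screening check might be sketched as below. The cautionary margin of 90% of the limit is a hypothetical operator setting, not a manufacturer value.

```python
# Screen a touchdown against hard-landing limits, with a lower cautionary
# band as most manufacturers publish; the 0.9 margin is illustrative.

def classify_touchdown(peak_g, sink_fpm, overweight, caution_margin=0.9):
    """Classify a touchdown as normal, cautionary, or requiring inspection."""
    # Certification thresholds from the text: 1.7 g / 360 fpm overweight,
    # otherwise 2.6 g / 600 fpm at the certified maximum landing weight.
    g_limit, fpm_limit = (1.7, 360.0) if overweight else (2.6, 600.0)
    if peak_g > g_limit or sink_fpm > fpm_limit:
        return "hard-landing inspection required"
    if peak_g > g_limit * caution_margin or sink_fpm > fpm_limit * caution_margin:
        return "cautionary: supplementary check advised"
    return "normal"

result = classify_touchdown(peak_g=2.7, sink_fpm=480.0, overweight=False)
```

A graduated scheme like this is what lets operators capture cautionary exceedances without triggering unnecessary mandatory inspections.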
2.10.1.23 The flight data analyst/flight operations interpreter must understand and identify:
a) Contextual elements of the occurrence;
b) Triggering factors; and
c) Hazardous situations.
Often a clearer understanding is gained through crew interviews to establish the exact context of the occurrence.
2.10.1.25 Crew Contact. The crew interview process should be clearly defined in the FDA Program and agreed upon with the flight crew contact person within the operator. The analyst performs crew interviews to increase understanding of a situation and improve safety. The analyst must prepare well for crew interviews and must ask clear, straightforward questions during the interview to understand the events. It is essential that the analyst focus on events where the degree of identified hazard is severe enough to be targeted most beneficially. The approach is to prevent high-frequency, low-risk events and eliminate low-frequency, high-risk events. In assessing the level of risk, the analyst must consider both the direct risks and the indirect risks that may be a consequence of those circumstances.
2.10.1.27 Direct and Indirect Risks. A hard GPWS warning indicates a direct risk, whereas an indirect risk would be a growing number of false warnings of lower risk, which may result in pilots becoming too accustomed to hearing such warnings, thus reducing the effectiveness of standard recovery from actual warnings.
Reduced effectiveness of warnings could be catastrophic if not addressed. The operator must have procedures to maintain the efficacy of crew actions.
2.10.1.29 The principal role of a Flight Data Analyst is to check the quality of the processed data and validate events that have exceeded the limits set in the Analysis Specification. Analysts validate events if the exceedance is equal to, or greater than, the threshold value. Therefore, the analyst must correctly set the threshold value for each event and validate the processed data.
2.10.1.31 FDAP is a flight operation performance measuring system that uses algorithmic software analysis to ease the large-scale implementation of flight data analysis. These technologies enable air carriers to analyze flight data to identify safety trends and increase flight reliability. Some analysis tools are advanced enough to look beyond event detection within individual flights and identify systemic problems through statistical analysis of many flights. These advanced tools have three significant focuses beyond exceedance detection:
A. Safety and efficiency
B. Focused analysis of higher-risk phases of flight
C. Data mining for potential precursors of incidents and accidents.
2.10.1.33 Each recorded parameter is matched with an appropriate upper or lower limit known as the threshold value. There are three methods of choosing threshold values, and operators may select any one of them for a recorded parameter. For example, if the Flight Crew Operating Manual (FCOM) states that the aircraft's maximum airspeed is 250 knots below 5,000 ft AAL, the event threshold may be set in three different ways. The choice of threshold setting depends on the criticality of the parameter:
A. Method 1: Trending up to the limit. This method gives the operator good trending when monitoring aircraft as they approach the limit. The Level 3 threshold is set at the airspeed limit, and the Level 2 and Level 1 thresholds are set below it, as shown in figure 17.5.1.
B. Method 2: Determining the extent of an exceedance. This method shows the extent of any exceedance but does not allow trending towards the airspeed limit. The Level 1 threshold is set at the airspeed limit. This method may not be suitable if the Standard Operating Procedures (SOP) set the lower limit at 240 knots. Refer to figure 17.5.2.
C. Method 3: Optimal event thresholds. Trending towards exceeding the limit is possible by counting the instances of Level 1 events. Momentary airspeed exceedances are triggered as Level 2 events. Setting the Level 3 threshold slightly above the airspeed limit allows a tolerance or 'buffer' for the arguably less significant exceedances; the more severe exceedances are captured when the aircraft is flown beyond the limit and the buffer. The Level 2 threshold is set at the airspeed limit, as shown in figure 17.5.3.
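The three threshold-setting methods above can be compared in a short sketch. The 5-knot step between levels is a hypothetical value chosen for illustration; an operator's Analysis Specification would define the actual offsets.

```python
# Compare the three methods of setting three-level event thresholds around
# an airspeed limit (e.g. 250 kt below 5,000 ft AAL); step is illustrative.

def event_level(speed, limit, method, step=5.0):
    """Return 0 (no event) or the highest triggered level 1-3 for a method."""
    if method == 1:
        # Method 1: Level 3 at the limit, Levels 1-2 below -> trends up to it
        levels = [limit - 2 * step, limit - step, limit]
    elif method == 2:
        # Method 2: Level 1 at the limit, Levels 2-3 above -> extent of exceedance
        levels = [limit, limit + step, limit + 2 * step]
    else:
        # Method 3: Level 2 at the limit, Level 3 buffered slightly above
        levels = [limit - step, limit, limit + step]
    triggered = 0
    for lvl, threshold in enumerate(levels, start=1):
        if speed >= threshold:
            triggered = lvl  # thresholds ascend, so keep the highest level hit
    return triggered
```

For example, a momentary 252-kt excursion against a 250-kt limit registers as Level 1 under Method 2 but Level 2 under Method 3, while Method 1 would already be trending it before the limit is reached.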
2.10.1.35 Maintenance Event Thresholds. Maintenance events may also have three threshold levels. In contrast with the safety events, the thresholds are used to project exceedances and highlight areas of an aircraft operation. Maintenance thresholds are set to indicate maintenance actions required due to an exceedance.
2.10.1.37 No buffer is required for these events, and Level 1 or Level 2 thresholds may be considered unnecessary. Many maintenance events have a counterpart safety event if trending towards the limit is required.
2.10.1.39 An example of the difference between the two types of event thresholds is a flap airspeed exceedance. An operator might use Method 3 to set the flap speed safety thresholds as follows: Safety Level 1 = VFE – 5 knots; Safety Level 2 = VFE; Safety Level 3 = VFE + 5 knots.
2.10.1.41 For the equivalent maintenance event, the Level 3 threshold would be set to VFE (the airspeed above which maintenance action is required): Maintenance Level 3 = VFE. As the trending of airspeeds above and below the limit is unlikely to be of value to maintenance personnel, Level 1 and Level 2 thresholds are not specified. Details of how an operator can change the event thresholds in their Analysis Specification are given in the Flight Data Services publications.
2.10.1.43 The extent of flight data analysis and the reporting of flight operation performance depend on the size and age of the fleet. A typical FDAP feedback loop comprises the following steps:
A. Identify areas of operational risk. The FDA Program must be used as part of an operator's system safety assessment to identify deviations from SOPs, measure current safety margins, and detect operational areas of risk. The safety margin establishes a baseline operational measure against which to detect and measure any change. For example, the current rates of rejected takeoffs, hard landings, and unstable approaches are considered areas of operational risk.
B. Identifying and quantifying operational risks. Identifying and highlighting non-standard, unusual, or unsafe circumstances is crucial for the safety monitoring system. In addition to highlighting changes from the baseline, the system should enable the user to determine when non-standard, unusual, or unsafe circumstances occur in operations. For example, the FDAP identifies increases in event rates, new events, and new locations.
C. Combine frequency of occurrence and level of severity. FDAP analysis data must be used to assess the risks and to determine which are, or may become, unacceptable if a discovered trend continues. Information on the frequency of occurrence and the level of risk present must be interpreted correctly to determine whether the aircraft or fleet risk level is acceptable. The operator must develop procedures to reverse a trend before it reaches an unacceptable level. If the level of risk becomes unacceptably high, it indicates that the SMS process has failed. For example, a new procedure is introduced that results in a high rate of descent, triggering GPWS warnings; the warnings mean that the SMS process is not being used effectively to manage changes by predicting the risk involved.
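Step C, combining frequency of occurrence with severity, is commonly implemented as a risk matrix. The category scales and the acceptability cut-offs below are hypothetical illustrations, not GACA-prescribed values.

```python
# A minimal risk-matrix sketch: frequency x severity scored and mapped to an
# acceptability band; scales and cut-offs are invented for illustration.

FREQUENCY = {"rare": 1, "occasional": 2, "frequent": 3}
SEVERITY = {"minor": 1, "major": 2, "catastrophic": 3}

def risk_acceptability(frequency, severity):
    """Combine frequency and severity categories into an acceptability band."""
    score = FREQUENCY[frequency] * SEVERITY[severity]
    if score >= 6:
        return "unacceptable"
    if score >= 3:
        return "tolerable with mitigation"
    return "acceptable"
```

An event trend that moves a cell from "tolerable" towards "unacceptable" is exactly the signal the operator's procedures must reverse before the unacceptable level is reached.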
D. Risk mitigation. Operators must develop procedures to provide remedial action to mitigate unacceptable risk, whether present or predicted by trending. Safety risks might already be present in the system or be foreseen to affect the system in the future. Procedures must be developed for risk mitigation if the risk level is beyond the acceptable limit. The safety risk must be controlled, keeping in mind that the risk must not be transferred elsewhere in the system. For example, if there are high rates of descent, the Standard Operating Procedures (SOPs) are changed to reverse the adverse trend without transferring the risk to other areas.
E. Confirm the effectiveness of remedial action. When a remedy is implemented, its effectiveness must be monitored and confirmed to have reduced the identified risk without transferring the hazard elsewhere. For example, ensure that other measures at an airfield with a high rate of descent do not deteriorate after the new approach procedures are introduced.
2.10.1.45 FDAP Requirement. 1) GACAR Part 5 Appendix A states that each operator of an airplane with a maximum certificated takeoff mass of more than 27,000 kg must establish and maintain a flight data analysis program (FDAP) as part of its SMS. The FDAP must include a method of capturing and analyzing the data generated by each applicable aircraft when moving through the air from one point to another. The goal of the FDAP must be to improve aviation safety.
2) An operator may contract the FDAP data analysis activities to another party while retaining the overall responsibility for maintaining such a program. 3) Each FDAP must be non-punitive and contain adequate safeguards to protect the source(s) of the data.
When contracting the FDA Program, the operator must identify a person with knowledge of FDAP principles and application to ensure proper implementation. The operator remains ultimately responsible for the FDAP. Part 121 certificate holders with a small number of aircraft or a small mixed fleet are encouraged to pool FDAP data with other similar operators to obtain a more representative picture. The occurrences found during flight data analysis must be forwarded to GACA every month. The format for monthly reporting of Operational and Safety Statistics is in Appendix 4A. Scheduled commercial air operators must submit the report every month, unscheduled commercial air operators must submit the reports quarterly, and other operators of aircraft fitted with flight data recorders must submit the reports once every six months.
Section 2. Fleet Performance, Engineering Statistics, and Analysis
2.10.2.1 GACAR Part 4 lays down that all certificate holders should prepare a monthly report on fleet performance and engineering statistics (ESR) to determine the reliability of aircraft systems and components, and submit it at the specified intervals mentioned below:
A. Scheduled Commercial Air Operator – Monthly
B. Unscheduled Commercial Air Operator – Quarterly
C. Private operators – Every six months
D. Other operators approved under Part 91 – Annually (Note: report the applicable items given in the Performance Reporting Form for Private Operators)
2.10.2.3 The purpose of the data analysis requirement is to analyze the statistical data to identify safety trends. The objective includes identifying any deficiency in the basic design of a component or the layout of an aircraft system, and in the maintenance practices followed by the aircraft operator. The aircraft operator must take the necessary steps to correct system deficiencies so that the operational reliability of the aircraft is achieved.
2.10.2.5 This chapter describes details of the safety information, method of presentation, and the frequency at which the approval holder is required to submit the fleet performance and Engineering Statistical Report (ESR).
2.10.2.7 For uniformity of terminology and standardization of presentation, the formats of the monthly fleet performance report for scheduled commercial air operators, unscheduled commercial air operators, and private operators are given in Appendices 4A, 4B, and 4C.
2.10.2.9 The Annual ESR of Part 91 operators must contain at least the following details for each aircraft:
A. Aircraft registration, aircraft owner and operator
B. Aircraft and engine details
C. Details of hours/cycles flown
D. Details of maintenance carried out, including details of the maintenance person
E. Details of modifications, repairs, and ADs/SBs complied with
F. Details of accidents/incidents
2.10.2.11 Contents of the Report. The fleet statistics and safety performance report format is divided into three parts. Each part will contain the following minimum data according to the size and type of fleet:
A. Part 1 is general and should contain a brief introduction to the ESR of the operator, distribution list, and glossary of terms/ definitions used in the report as applicable to an individual operator.
B. Part 2 should include the entire fleet registration details for the period under review. C. Part 3 should consist of several sections according to the type of aircraft. Each section contains the aircraft operating summary for the particular aircraft type, a summary of mechanical interruptions by ATA chapter, flight cancellations/diversions, details of premature engine removals, engine IFSDs, premature APU removals, a summary of system reliability by ATA chapter, an overview of system performance, a summary of unscheduled component removals, details of CVR/FDR removals, release of aircraft under MEL, autoland system reliability, and ETOPS/EDTO reliability.
2.10.2.13 In addition to the numerical data, the operator should provide a bar chart/graph corresponding to each aircraft fleet type with the following details: A. Average daily utilization of aircraft: bar charts summarize the daily utilization rate. The graph may be a six-month rolling one to compare average daily utilization rates. For example, the chart for June should give the data from January to June.
B. Aircraft hours/cycles logged: bar charts should depict the hours/cycles logged for each aircraft type and registration mark. C. Engineering defects by aircraft registration. D. Engineering defects by ATA system.
E. System reliability – this will be a linear graph, with an individual graph for each ATA chapter. 2.10.2.15 In addition to the numerical data, operators are encouraged to provide a comparative bar chart in rolling form. For example, the statistical report for May 2022 should provide, in the same bar chart, the data for May together with the data for the previous months from January 2022. This type of comparative chart supports analyzing the trend at a glance instead of referring to the report of the preceding month.
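The rolling comparative chart described in 2.10.2.15 implies assembling, for each report, the monthly counts from January up to the report month. A sketch of that assembly (with invented defect counts) might look like:

```python
# Build the year-to-date series behind a rolling comparative bar chart:
# for a May report, emit Jan..May so the trend is visible in one chart.

def rolling_series(monthly_counts, report_month):
    """Return (month, count) pairs from January up to the report month."""
    months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
              "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
    end = months.index(report_month) + 1
    # months with no recorded count default to zero so the chart has no gaps
    return [(m, monthly_counts.get(m, 0)) for m in months[:end]]

# Hypothetical monthly engineering-defect counts for a May 2022 report.
counts = {"Jan": 12, "Feb": 9, "Mar": 14, "Apr": 11, "May": 10}
series = rolling_series(counts, "May")
```

The resulting series feeds directly into whatever charting tool the operator uses, so each month's report carries the whole year-to-date comparison.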
2.10.2.17 Aircraft operators, or repair stations authorized by an operator, are required to prepare the Engineering Statistics Report (ESR) at the intervals mentioned above and evaluate it for any shortcomings that need immediate corrective action. Repair stations not affiliated with any operator must include a list of SDRs, major defects, and major repairs in the monthly statement. The organization must submit a copy of the ESR, together with details of internal audits and remedial actions on GACA oversight findings, to GACA before the 10th day of every month for the preceding period.
Section 3. Aerodrome Operational and Safety Performance Reporting
2.10.3.1 Each aerodrome operator certificated or authorized by GACA, as referred to in GACAR §4.5, must establish a mandatory reporting system to facilitate the collection of details of occurrences within the organization.
Measuring occurrences in an aerodrome means evaluating safety. Therefore, the aerodrome operators must monitor system performance continuously and identify possible deviations from the set safety standards.
Aerodrome operators must conduct periodic internal audits to identify any non-compliance with, or potential deviation from, the approved procedures. GACA oversight is another method of monitoring process deviations. Aerodrome operators must address all discrepancies identified during internal audits and GACA oversight. Certificated public aerodrome operators must submit a report on aerodrome operational statistics and safety performance to the President every month, whereas authorization holders must submit the report annually.
2.10.3.3 Safety is the state in which the risks associated with activities at the aerodrome are reduced and controlled to an acceptable level. Aerodrome operators must select safety parameters that are easily measured. Measuring safety parameters helps aerodrome operators manage safety risks effectively.
The standardized measurement of safety data supports GACA in analyzing state-level safety performance at aerodromes. 2.10.3.5 Aerodrome operators must develop an integrated system of reporting occurrences. The aerodrome operations manual must have a detailed description of how personnel report day-to-day occurrences to the assigned person. The operator must establish a system of safety data collection and occurrence analysis for identifying safety risks and trends in the system. The performance of an individual sub-system alone does not guarantee the system's safety. For example, good maintenance of the runway, upkeep of the lighting and signaling systems, and well-trained operations personnel together contribute to higher system performance. An integrated system of occurrence analysis is essential to ensure the safety performance of the aerodrome operation.
2.10.3.7 Aerodrome safety indicators must be measurable values. The aerodrome operator must identify measurable performance indicators. The incident reporting system must support the operators in performance evaluation. For example, measurable events include the number of vehicle accidents, the total number of incidents, and violations of or deviations from aerodrome design standards. Effective measuring strategies include:
A. Developing a proper reporting format,
B. Setting appropriate reporting channels, and
C. Imparting adequate training to the staff on occurrence reporting.
If safety occurrences are not captured promptly, performance measurement may yield inaccurate outcomes. Inaccuracies in occurrence reporting often lead to complacency, creating the impression that the system is performing at a superior state of safety. Appendix E of GACAR Part 4 lists typical examples of occurrences at an aerodrome.
2.10.3.9 Aerodrome systems are composed of the interactions between humans, organizations, and technology. The safety component at an aerodrome depends on the level of technology adopted, the upgrading of facilities, and infrastructure maintenance, in terms of their complexity. Thus, synchronized performance is essential from all significant sub-systems. Aerodrome personnel must know that any erosion of safety standards will have major safety implications. The operator must have an accurate and timely occurrence reporting system that captures system risks, and must take quick remedial action to restore the facilities.
2.10.3.11 Aerodrome Operators must collect safety data on occurrences to manage safety risks. Operators emphasizing capturing system deficiencies and taking prompt remedial actions will reduce the risk of adverse outcomes and allow for meaningful safety performance metrics. It is essential to capture safety data reflecting the aerodrome system behaviors to ensure effective SMS implementation. Aerodrome operators must maintain an occurrence database to use it as a decision tool to analyze and control safety risks.
The safety measurement system proposed in this chapter relies on a five-tier approach that represents safety performance in terms of the activities of both GACA and the aerodrome service providers, and the data collected from those activities, as given below:
A. Safety investigations: The aerodrome safety system is partially based on feedback and lessons from accidents and incidents. This type of reactive data is essential to review and disseminate safety lessons among aerodrome personnel.
B. Occurrence Reporting: The occurrence reporting system facilitates capturing data on actual or potential safety deficiencies in the aerodrome operation. There are two systems available to report occurrences. The mandatory system covers the regulatory obligations of occurrence reporting, whereas the voluntary occurrence reporting system is non-obligatory and facilitates the collection of occurrence data not mandated by regulation.
C. GACA oversight and internal audit data: The GACA safety oversight system and the aerodrome operator's internal audit system provide continuous performance assurance through inspections and audits, and the results of these audits and inspections represent valuable data available to the operator and to GACA. Aerodrome operators must submit their internal audit reports to GACA for state-level safety analysis.
D. Information generated by other regulators: The ICAO or its Middle East regional office may conduct inspections and audits at the aerodrome. The results of such audits constitute a vital element of the safety information available to GACA.
E. Exchange of safety information: The GACA system of safety information exchange programs with aerodrome operators facilitates sharing of safety data among all operators without identifying the name of the operators.
2.10.3.13 Aerodrome service providers must submit monthly performance reports in the prescribed format referred to in Appendix 5 of this eBook volume. The monthly report must cover occurrences including, but not limited to, the following occurrences related to aerodrome operations:
A. A collision or near-collision on the ground between an aircraft and another aircraft, terrain, or an obstacle, including vehicles
B. Wildlife strike, including bird strike
C. Taxiway or runway excursion
D. Actual or potential taxiway or runway incursion
E. Final Approach and Takeoff Area (FATO) incursion or excursion
F. Aircraft or vehicle failure to follow a clearance, instruction, or restriction while operating on the movement area of an aerodrome (for example, using the wrong runway or taxiway, or entering a restricted part of an aerodrome)
G. Foreign objects on the aerodrome movement area that have or could have endangered the aircraft, its occupants, or any other person; FOD may include aircraft parts or components, or broken tire pieces on the runway
H. Presence of obstacles on the aerodrome or in its vicinity that are not published in the AIP (Aeronautical Information Publication) or notified by NOTAM (Notice to Airmen), or that are not properly marked or lighted
I. Pushback, power-back, or taxi interference by a vehicle, equipment, or person
J. Passengers or unauthorized persons left unsupervised on the apron
K. Jet blast, rotor downwash, or propeller blast causing injury to a person or displacing objects in apron areas
L. Significant failure, malfunction, or defect of aerodrome equipment or systems that has or could have endangered the aircraft or its occupants
M. Significant deficiencies in aerodrome lighting, marking, or signs
N. Failure of the aerodrome emergency alerting system
O. Rescue and firefighting services not available as per the stipulated requirements
P. Fire, smoke, or explosions in aerodrome facilities, their vicinity, or equipment that have or could have endangered the aircraft, its occupants, or any other person
Q. Aerodrome security-related occurrences (for example, unlawful entry, sabotage, bomb threat)
R. Significant changes in aerodrome operating conditions that were not disseminated and that have or could have endangered the aircraft, its occupants, or any other person
S. Missing, incorrect, or inadequate de-icing/anti-icing treatment
T. Significant spillage during fueling operations due to breakage of the refueler or hydrant hose system
U. Loading of contaminated or incorrect fuel (misfuelling) or other essential fluids (including oxygen, nitrogen, oil, and potable water)
V. Failure to repair poor runway surface conditions or failure to remove runway debris
W. Any occurrence in which human performance directly contributed to, or could have contributed to, an accident or serious incident
Section 4. Air Navigation Operational and Safety Performance
2.10.4.1 Occurrence data collection and an occurrence database allow ANS providers to assess safety performance quickly. An integrated approach should be taken to collecting all occurrence data, including accidents, incidents, and safety hazards. ANS providers are required to share the consolidated data with GACA periodically, so that GACA can monitor the performance of air navigation service providers at the state level and take corrective measures if drifts are identified in industry performance. The safety data collected from ATM operators provide a better understanding of how the ATM system produces safe operation, and GACA uses the database to build more effective regulatory and oversight systems. ANS operators must therefore have a system to capture the details of all occurrences and submit a report in the prescribed format every month.
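As an illustration only, an occurrence-database record and a monthly roll-up for the report could be sketched as follows; the field names and category strings are assumptions, not the prescribed GACA format:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical occurrence record for an ANS provider's database.
# Fields and categories are illustrative assumptions, not GACA's
# prescribed reporting format.

@dataclass
class Occurrence:
    occurred_on: date
    category: str          # e.g. "level_bust", "loss_of_separation"
    severity: str          # e.g. "incident", "serious_incident"
    description: str
    remedial_action: str = ""

def monthly_summary(records):
    """Count occurrences per category for a monthly consolidated report."""
    summary = {}
    for rec in records:
        summary[rec.category] = summary.get(rec.category, 0) + 1
    return summary
```

A roll-up like this supports the periodic submission to GACA while keeping the full record available for internal analysis.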
2.10.4.3 ANS operators must have procedures in the ANS operations manual for notifying GACA of all significant incidents. Notwithstanding the notification, investigation, or reporting of significant incidents, ANS providers must also capture and report all safety hazards and deviations noticed during internal audits and inspections. The GACA data analysis system uses the safety data collected from ANS providers to assess how the service is delivered and to control risk explicitly through a wider variety of regulatory and oversight measures.
2.10.4.5 The reporting form provides a standardized approach to data collection, monitoring, and performance measurement in the air navigation system. The GACA SSP data analysis group will analyze the industry data on ANS, identify actual or potential risks, and use the reports for industry trend analysis and state-level risk identification. ANS policy measures are revised, if required, based on the outcome of the data analysis.
2.10.4.7 ANS providers must notify and report significant incidents as soon as practicable, as described in GACAR Part 4. Irrespective of reporting accidents and serious incidents to GACA/AIB, ANS operators must submit monthly operational statistics and safety performance reports detailing all accidents and incidents.
2.10.4.9 A level bust, for example, is an unauthorized vertical deviation of more than 300 feet from an ATC flight clearance; it is a reportable event. In some States, within Reduced Vertical Separation Minima (RVSM) airspace, the limit is 200 feet. Level busts relate only to aircraft in controlled airspace, or in designated airspace outside controlled airspace, under either radar or procedural ATC control.
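The reportability thresholds above can be expressed as a simple check; this function is an illustrative sketch, not a GACA tool:

```python
def is_level_bust(deviation_ft: float, rvsm: bool = False) -> bool:
    """Reportable level bust: vertical deviation from the cleared level
    exceeding 300 ft, or 200 ft within RVSM airspace (as applied in
    some States).  Sign of the deviation does not matter."""
    limit = 200.0 if rvsm else 300.0
    return abs(deviation_ft) > limit
```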
2.10.4.11 A loss of separation between airborne aircraft occurs whenever the specified separation minima in controlled airspace have not been adhered to. ATS specifies the minimum separation standards for the airspace. Aircraft are safely separated as long as either the horizontal or the vertical separation minimum is maintained; a loss of separation occurs when an aircraft infringes both the vertical and the horizontal limits.
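The both-limits-infringed logic can be sketched as follows; the minima values used here are assumptions for illustration only, since the actual minima are those specified by ATS for the airspace concerned:

```python
# Illustrative separation check (not a GACA tool).  The minima below
# are assumed example values; real minima depend on the airspace.
HORIZONTAL_MIN_NM = 5.0   # assumed radar separation minimum
VERTICAL_MIN_FT = 1000.0  # assumed vertical separation minimum

def loss_of_separation(horiz_nm: float, vert_ft: float) -> bool:
    """A loss of separation occurs only when BOTH the horizontal and
    the vertical separation minima are infringed at the same time."""
    return horiz_nm < HORIZONTAL_MIN_NM and vert_ft < VERTICAL_MIN_FT
```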
ANS providers must report such losses of separation to GACA and investigate them internally.
2.10.4.13 An air traffic incident often relates to occurrences that do not affect aircraft safety directly; for example, near-collisions between vehicles, or pedestrian deviations from set procedures.
2.10.4.15 The ANS monthly operational and safety report should be prepared and submitted in the prescribed format shown in Appendix 6. The monthly report must consolidate all occurrences shown below:
A. An aircraft collision or near collision with another aircraft or with an obstacle, including vehicles, and controlled flight into terrain or near CFIT
B. Infringement of separation minima between aircraft, or between aircraft and airspace to which a separation minimum is prescribed
C. Inadequate separation: in the absence of prescribed separation minima, a situation in which aircraft are perceived to pass too close to each other for pilots to ensure safe separation
D. Wildlife strike, including bird strike, and detection of bird movements in the landing and takeoff paths in the vicinity of the aerodrome
E. Actual taxiway or runway excursion; actual or potential taxiway or runway incursion; Final Approach and Takeoff Area (FATO) incursion
F. Aircraft deviation from an ATC clearance or from applicable air traffic management (ATM) regulations: (i) deviation from applicable published ATM procedures; (ii) airspace infringement, including unauthorized penetration of airspace; (iii) deviation from ATM-related aircraft equipment carriage and operation, as mandated by applicable regulations
G. Occurrences related to call sign confusion
H. Flight conducted below the ATC-authorized SVFR weather minima
I. Degradation or total loss of services or functions:
1) Inability to provide ATM services: (a) inability to provide air traffic services or to execute air traffic services functions; (b) inability to provide airspace management services or to execute airspace management functions; (c) inability to provide air traffic flow management and capacity services or to execute the corresponding functions. Operators must report occurrences of these types.
2) Data deterioration, meaning missing, incorrect, corrupted, inadequate, or misleading data. Support services such as ATS, ATIS, meteorological services, navigation databases, maps, charts, and AIS may generate erroneous data if the set standards have not been adhered to, leading to poor service, including in relation to poor runway surface conditions. Occurrence reporting must cover all such data-related events.
3) Failure of communication, surveillance, data processing and distribution, or navigation services, or failure of ATM system security, which had or could have had a direct negative impact on the safe provision of service.
4) Significant ATS sector/position overload leading to a potential deterioration in service provision.
5) Incorrect receipt or interpretation of ANS communications, including lack of understanding of the language used, when this had or could have had a negative impact on the safe provision of service.
6) Prolonged loss of communication with an aircraft or with other ATS units.
J. Other occurrences:
1) Declaration of an emergency (MAYDAY or PAN call).
2) Significant external interference with air navigation services (for example, radio broadcast stations transmitting in the FM band and interfering with the ILS (instrument landing system), VOR (VHF Omnidirectional Radio Range), or communications).
3) Interference with an aircraft, an ATS unit, or a radio communication transmission by firearms, fireworks, flying kites, laser illumination, high-powered lights, Remotely Piloted Aircraft Systems, model aircraft, or similar means.
4) Fuel dumping.
5) Bomb threat or hijack.
6) Fatigue that degrades or could degrade the ability of ANS personnel to perform their air navigation duties safely.
7) Any occurrence in which human performance directly contributed to, or could have contributed to, an accident or serious incident.
8) Unauthorized unmanned aircraft operations within the aerodrome control zone or airspace.
The ANS provider must maintain records of occurrences and report them to the President through monthly operational statistics and safety performance reporting.
Section 5. Ground Services Operational and Safety Performance Reporting
2.10.5.1 Initial notification, reporting, and investigation are necessary for significant safety occurrences. Ground service operators must report all non-serious events to GACA every month, using the prescribed operational statistics and safety performance format.
2.10.5.3 Ground services organizations must submit consolidated operational statistics and safety performance reports, including internal audit and GACA surveillance reports, to the President every month.
2.10.5.5 The format for reporting Ground Services Operational statistics and Safety Performance is given in Appendix 7 of this eBook.
Section 1. Types of Safety Data
2.11.1.1 Proactive safety information includes information collected through internal audit reports and reports of remedial actions taken in response to GACA periodic surveillance. All certificated organizations – air operators, flight training organizations, aircraft maintenance organizations, aerodromes, and air navigation service providers – must submit to GACA their internal audit findings, action completion reports, and GACA surveillance findings with discrepancy closure reports, as part of the mandatory reporting system referred to in §4.7 of GACAR Part 4.
2.11.1.3 Predictive safety data include safety information collected from organizations' internal audits, remedial actions taken in response to GACA oversight/surveillance findings, and Flight Data Analysis Program (FDAP) reports of the aircraft operator, where applicable under the provisions of GACAR Parts 121, 135, and 125. Predictive safety data also include ATC recorder analysis. All organizations must consolidate such data and submit it periodically to the President for sectoral, organizational, and state-level safety analysis.
2.11.1.5 Reactive safety data are the reports of accident and incident investigations. Irrespective of incident reporting, organizations are required to submit periodic reports consolidating all incident and accident investigations, including remedial action reports, to the President for sectoral, organizational, and state-level safety analysis.
2.11.1.7 All certificated organizations must submit these data periodically.
Section 2. Safety Data Collection
2.11.2.1 Reporting accidents and serious incidents is essential to the KSA aviation safety system. The details of non-serious occurrences, latent safety hazards, FDAP outputs, and operational deviations are also gathered by GACA through an online reporting system, emails, internal audit reports, and the periodic reporting of operational statistics and safety performance. Formats for aviation safety data collection are given in the appendices of this eBook volume. GACA has set up a dedicated data analysis group as part of the KSA SSP and established data analysis procedures to identify safety issues and trends. The periodic reporting of operational statistics and safety performance includes the following details:
A. Operational data, such as the total number of flights in a given period, total hours flown by aircraft, and the number of cycles run by aircraft/engines
B. Circumstances of accidents and incidents, including weather conditions, day or night operation, and the flight phase of the incident (takeoff, climb, cruise, descent, landing)
C. Types of occurrences, such as aircraft system failures, operational errors, runway-related incidents, wildlife menace or strikes, ground equipment failures, ground service operator errors, and failures of navigation, communication, radar, or ILS equipment
D. Issues related to instrument or visual flight rules, navigation procedures, flight operation procedures, or aircraft and equipment maintenance
E. Security-related incidents, including bomb threats and unruly passengers
F. The number of accident investigations in progress, incident investigations in progress, airworthiness-related issues, and SDRs under analysis and study
2.11.2.3 The data collected through the periodic operational statistics and safety information of the various sectors give GACA a broad understanding of safety trends. The data analysis covers all sections of the aviation standards divisions – flight operations, airworthiness, aerodromes, and the air navigation system.
Section 3. Sources of Safety Information
2.11.3.1 The KSA AIB is an independent agency for information processing and the investigation of accidents and incidents. The AIB collects, collates, analyzes, and shares safety-related information among stakeholders, including ICAO. The identified causal factors of accidents and the resulting safety recommendations are published on the AIB website for stakeholders to view.
2.11.3.3 The second significant source of safety information is the certificate holders – all aircraft operators, aerodrome service providers, ANS providers, and aircraft maintenance organizations.
All service providers analyze, study, and investigate significant safety issues to identify root causes and take corrective actions. These details are reported to GACA periodically for state-level analysis of safety.
2.11.3.5 The third substantial data collection system is the GACA regulatory oversight system. GACA analyzes oversight data to identify areas of risk and to correct deficiencies in the system through suitable measures, including policy updates or strengthened oversight activities.
2.11.3.7 Internal audit reports. Under the provisions of GACAR Part 5 (Safety Management Systems), all certificated organizations – aircraft operators, aerodrome operators, and air navigation service providers – must conduct internal audits to verify regulatory compliance. The internal audit must address continued compliance across all areas of operation within 24 months, as acceptable to the President. This eBook provides detailed procedures for the various auditing methods. Certificate holders should submit copies of audit reports and remedial actions to GACA soon after conducting the audits and completing the rectification actions. In any case, operators must submit audit reports to GACA as evidence of compliance with regulatory requirements for renewal of the organization certificate.
GACA analyzes the audit data collected from the various certificate holders to monitor the performance of the respective operators. GACA also uses the audit data from all operators for sectoral analysis to identify common issues at the state level.
2.11.3.9 GACA surveillance reports. Performing surveillance and collecting safety data fall under the purview of implementing the oversight provisions of GACAR Part 5. The oversight system and the actions for resolving safety issues described under Part 5 align with ICAO critical elements 7 and 8, implemented as part of the KSA SSP. GACA conducts oversight of all certificated organizations to verify compliance with the applicable GACAR provisions. The surveillance policy follows a performance-based surveillance approach: audit and oversight information is analyzed, and corrective measures are identified for the weak areas of an operator or sector, as appropriate to the data analysis outcomes.
2.11.3.11 Safety hazard reports. Substantial safety data is collected through the reporting of day-to-day safety hazards and procedural discrepancies. The operator's SMS internal reporting system must collect such data and initiate timely action to remove hazards. The operator must submit the hazard data to GACA for state-level safety trend monitoring, investigation, and review. The occurrence data collection system is designed to capture safety data in accordance with GACAR Part 4. While the investigation of incidents is primarily the responsibility of the respective organization, GACA may participate in, monitor, or conduct the investigation depending on the nature of the incident. In all cases, the safety data of all operators are collected by GACA/AIB for risk identification and trend monitoring at the state level. The information received from the routine reporting of safety hazards is a significant part of the safety database.
2.11.3.13 Flight Data Analysis Report. The regulatory requirements for implementing FDAP are described in Section 1, Chapter 7, Volume 2 of this eBook. The Flight Data Analysis Program is often referred to as Flight Data Monitoring (FDM) or Flight Operations Quality Assurance (FOQA). Certificate holders under Part 121 must establish a flight data analysis system to enhance flight safety by identifying operational safety risks that are not detectable through normal observation. Flight Data Analysis (FDA) reports received under the provisions of GACAR Part 121 form a significant source of safety data within the organization and an essential part of the KSA SSP. The consolidated FDAP data of all airlines provide GACA and the AIB with safety data for state-level analysis. FDAP is based on the routine analysis of data recorded during an operator's revenue flights. These data are compared against pre-defined envelopes and values to check whether the aircraft flew outside the scope of the standard operating procedures (safety events). The FDA program highlights such safety events in flight operations.
Statistical analysis then assesses whether an event is isolated or part of a trend. The certificate holder must take corrective action if FDAP identifies safety risks in flight operations. These reports are collected through periodic reporting by GACA to analyze safety trends at the organization and state levels.
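A minimal sketch of the envelope comparison described above is shown below; the parameter names and limits are hypothetical, since real FDAP envelopes are operator- and aircraft-specific:

```python
# Hypothetical FDAP-style exceedance scan: compare recorded flight
# parameters against pre-defined envelopes and flag "safety events".
# The parameter names and (low, high) limits are illustrative only.

ENVELOPES = {
    "pitch_deg_at_touchdown": (0.0, 7.5),
    "vertical_speed_fpm_below_1000ft": (-1000.0, 0.0),
}

def scan_flight(samples):
    """Return the names of parameters that fell outside their envelope."""
    events = []
    for name, value in samples.items():
        lo, hi = ENVELOPES.get(name, (float("-inf"), float("inf")))
        if not (lo <= value <= hi):
            events.append(name)
    return events
```

In a real program, each flagged event would feed the statistical trend analysis described above rather than trigger action on its own.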
2.11.3.15 Periodic reporting of operational statistics and safety information. All organizations must submit a periodic report on operational statistics and safety information. The data collected through the periodic report add to the safety database for analyzing safety trends in the industry – aircraft operations, aerodrome operations, and air navigation – and for sector-wise analysis (scheduled and non-scheduled operations).
Section 4. Analysis of Aviation Safety Data
2.11.4.1 GACAR Part 4 requires all GACA certificated and authorized organizations to submit mandatory occurrence reports. The mandatory reporting system requires the reporting of accidents, incidents, aircraft defects, failures of aerodrome or air traffic service facilities or systems, failures of air navigation facilities, and the use of erroneous procedures that become hazardous to the safe operation of aircraft. Additionally, mandatory reporting covers the organization's internal audit findings and actions, and GACA surveillance findings and corrective actions. Voluntary occurrence reporting captures the safety data not collected through the mandatory system; by regulation, it protects the reporter's identity, keeps the data classified, and promotes a conducive reporting environment. The objective of collecting and analyzing aggregate safety data is to monitor state-level safety trends. The periodic reporting of operational statistics and safety information supports building an aggregate database at the state level, and the aggregate data analysis provides insight into the efficacy of the compliance and surveillance system. The data analysis requirements are outlined in the KSA SSP.
2.11.4.3 A dedicated multi-disciplinary data analysis team analyzes the aggregate data to identify safety trends and to provide GACA and stakeholders with early warning of risk areas. The SSP data analysis group performs data analysis and research and identifies areas of vulnerability. The SSP Safety Analysis and Coordination Group (SSP-SACG), consisting of a technical team of subject-matter specialists, focuses on safety data analysis and coordination. The group should support the SSP Cross-Agency Team in updating and revising the SSP document by identifying risks and formulating state-level remedial measures.
2.11.4.5 The data analysis group should include managers, experts, and specialists from the aviation industry, and should analyze data using manual methods or information technology tools to discover risk trends, specific events, exceedances, and aberrations at the sectoral level. The SSP-SACG analysis should produce data analysis reports that reveal otherwise undiscoverable safety insights.
2.11.4.7 GACA and the AIB have a memorandum of understanding (MOU) that, in effect, enables the exchange of de-identified safety data, limits the disclosure of proprietary information, and offers GACA exclusive access. Typical examples of data exchanged for study and analysis include aggregate data from all aviation organizations:
A. Accident and serious incident investigation reports
B. Incident category reports
C. Directed studies of traffic-alert and collision avoidance system (TCAS II) resolution advisories (RAs)
D. The capability to compare airline-level TAWS alerts and TCAS RAs with the experience of all other participants – called benchmarking – and the capability to benchmark a specific airline's experience against the aggregate experience
E. Shared experience with data showing unstabilized approaches
F. Area-related events
G. Airport-related events such as bird strikes and wildlife strikes
H. Specific types of ground incidents
I. ANS facility/equipment failures
J. Identification and analysis of regulatory noncompliance in specific areas
2.11.4.9 As an example of the FAA's data analysis method, taken here as a benchmark, airline-level analysis of TAWS alerts surfaced issues with inaccurate minimum vectoring altitudes (MVAs) around some airports. The FAA analysis revealed that several MVAs had not been designed appropriately and that a few key elements had not been addressed in the MVA design. The FAA therefore reworked those MVAs to make them appropriate for the surrounding terrain.
2.11.4.11 In another example, FAA data analysis identified that all fatal accidents attributable to wake and weather-related turbulence involved aircraft operating under Title 14, Code of Federal Regulations Part 91; there were no fatal wake-turbulence cases involving aircraft operating under Part 121.
2.11.4.13 The data analyses include tools for data visualization, anomaly detection, trend detection, and statistical and inference models. The data analysis process involves integrating and analyzing large, complex, and non-homogeneous data collected from pilot and controller text reports, and integrates digital flight data, radar, weather, and other sources. The analysis builds a comprehensive understanding of the flight environment and the situational context of individual and aggregate flight operations.
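As a minimal illustration of anomaly detection on aggregate data, a simple statistical outlier check over monthly occurrence counts might look like this; the z-score threshold is an assumption for illustration, not GACA policy:

```python
import statistics

def flag_anomalies(monthly_counts, z_limit=2.0):
    """Return the indices of months whose occurrence count deviates
    from the historical mean by more than z_limit standard deviations.
    A flagged month is a candidate for deeper trend investigation."""
    mean = statistics.mean(monthly_counts)
    sd = statistics.stdev(monthly_counts)
    if sd == 0:
        return []  # perfectly flat series: nothing stands out
    return [i for i, count in enumerate(monthly_counts)
            if abs(count - mean) / sd > z_limit]
```

Real state-level analysis would use richer models than a z-score, but the principle — separate isolated fluctuations from genuine drifts — is the same.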
CHAPTER 11. COLLECTING AND ANALYZING AVIATION SAFETY DATA
APPENDIX 2A. Occurrence Reporting Form: Flight Operation
APPENDIX 2B1. Occurrence Reporting Form: Airworthiness
APPENDIX 2B2. Service Difficulties Reporting Form
APPENDIX 2B3. Defect Reporting (Repair Shop/line station) Note: This form is for Repair Shop use only
APPENDIX 2C. Occurrence Reporting Form: ANS
APPENDIX 2D. Occurrence Reporting Form: Aerodrome
APPENDIX 2E. Occurrence Reporting Form: Ground Services
APPENDIX 2G. Occurrence Reporting Form: Dangerous Goods
APPENDIX 3. Performance Reporting Form: Flight Operation
APPENDIX 4A. Performance Reporting Form: Airworthiness (Scheduled Operators)
APPENDIX 4B. Performance Reporting Form: Airworthiness (NSOP)
APPENDIX 4C. Performance Reporting Form: Airworthiness (Private operators)
APPENDIX 5. Aerodrome Operational and Safety Statistics
APPENDIX 6. ANS Operational and Safety Statistics
APPENDIX 7. Ground Services Operational and Safety Statistics
APPENDIX 8. PIB Report Synopsis:
1. FACTUAL INFORMATION
1.1 History of the Flight
1.2 Injuries to Persons
1.3 Damage to Aircraft
1.4 Other Damage
1.5 Personnel Information
1.6 Aircraft Information
1.7 Meteorological Information
1.8 Aids to Navigation
1.9 Communication
1.10 Aerodrome Information
1.11 Flight Recorders
1.12 Wreckage and Impact Information
1.13 Medical and Pathological Information
1.14 Fire
1.15 Survival Aspects
1.16 Tests and Research
1.17 Additional Information
1.18 New Investigation Techniques
3. CONCLUSIONS
3.1 Findings
3.2 Causes
4. SAFETY RECOMMENDATIONS
Note: Mark items "not applicable" when they are not relevant to the investigation.