When I first flew with an inertial navigation system, the manual was over 300 pages long and nobody understood it. (A story about that first flight: "You Gotcher Handsfull, buddy!") These days the inertials hardly merit more than a paragraph or two, but having a basic understanding can help you when the GPS goes away.
Photo: Wild Bill Kelso navigating, from "1941" (John Belushi).
- A General Overview — the basic forms of navigation explained and a few definitions.
- Principles of Inertial Navigation — the theory is pretty simple: you start with a known position, detect movement, and from that you derive an end position. But the earth isn't flat and it is spinning in space. Things can get complicated. (As pilots we don't need to know this, but it might help you understand just how complicated things are.)
- Mechanics of Inertial Navigation Systems — a basic inertial navigation system involves accelerometers and gyros; probably includes mechanical gimbals or non-mechanical lasers. Modern systems (mechanical and laser) are very reliable. Accuracy can be improved with a hybrid solution that involves GPS.
- Honeywell Laseref VI — looking at an actual system may help solidify some of this theory. We'll look at the Honeywell Laseref VI, which is used in several models of Gulfstream, Bombardier, and Dassault aircraft.
A General Overview
The five basic forms of navigation
- Pilotage, which essentially relies on recognizing landmarks to know where you are. It is older than humankind.
- Dead reckoning, which relies on knowing where you started from plus some form of heading information and some estimate of speed.
- Celestial navigation, using time and the angles between local vertical and known celestial objects (e.g., sun, moon, or stars).
- Radio navigation, which relies on radio-frequency sources with known locations (including GNSS satellites, LORAN-C, Omega, TACAN, the US Army Position Location and Reporting System...)
- Inertial navigation, which relies on knowing your initial position, velocity, and attitude and thereafter measuring your attitude rates and accelerations. The operation of inertial navigation systems (INS) depends upon Newton’s laws of classical mechanics. It is the only form of navigation that does not rely on external references.
These forms of navigation can be used in combination as well. The subject of our seminar is the fifth form of navigation – inertial navigation.
- Inertia is the property of bodies to maintain constant translational and rotational velocity, unless disturbed by forces or torques, respectively (Newton’s first law of motion).
- An inertial reference frame is a coordinate frame in which Newton’s laws of motion are valid. Inertial reference frames are neither rotating nor accelerating.
- Inertial sensors measure rotation rate and acceleration, both of which are vector-valued variables.
- Gyroscopes are sensors for measuring rotation: rate gyroscopes measure rotation rate, and integrating gyroscopes (also called whole‐angle gyroscopes) measure rotation angle.
- Accelerometers are sensors for measuring acceleration. However, accelerometers cannot measure gravitational acceleration. That is, an accelerometer in free fall (or in orbit) has no detectable input.
- The input axis of an inertial sensor defines which vector component it measures. Multi‐axis sensors measure more than one component.
- An inertial measurement unit (IMU) or inertial reference unit (IRU) contains a cluster of sensors: accelerometers (three or more, but usually three) and gyroscopes (three or more, but usually three). These sensors are rigidly mounted to a common base to maintain the same relative orientation.
If you would like to know more about inertia, Newton's Laws of Motion, and the science of turning these things into positions, speeds, and directions, see: Mechanics, which deals with much more than just aerodynamics.
Principles of Inertial Navigation
Figure: Coordinate frames, from Stoval, figure 1.
[Stoval, pg. 3.]
- To begin our discussion, we reduce the problem to its simplest terms by assuming a flat earth and confining our motion to the surface. To indicate position we must establish a grid or coordinate system. We label one direction on the surface as N and another, perpendicular to it, as E. If the third direction is perpendicular to the surface in the down direction, a north-east-down (NED), right-handed coordinate frame is established.
- For the navigating system (platform), we label the longitudinal axis as x and the cross axis as y (Figure 1). The z axis is down in order to complete a right-handed coordinate system. This coordinate frame (xyz) moves with the vehicle and is fixed in the body. In general, this frame is rotated about the vertical with respect to the NED frame by some azimuth angle, ψ.
- Assume that, initially, the x axis is oriented north and the y axis is oriented east. (We will consider alignment later.) To track our motion with respect to the grid, we mount an accelerometer on the x axis and another on the y axis. (This is a strapdown configuration in which the inertial sensors are fixed with respect to the body of the navigating system.) If we assume that the NED frame is an inertial frame, the accelerometers will measure acceleration with respect to that frame, but the output will be in the xyz frame. Therefore, we will need to know the azimuth angle to navigate in the NED frame. We can sense the change in the azimuth angle with a gyro mounted on the z axis. The output of this gyro is ωz = dψ/dt. Since ψ is initially zero, the integral of ωz gives us ψ as a function of time. For now we assume perfect sensors. Once we have ψ, we can rotate the accelerometer outputs to the NED frame.
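The two-dimensional, flat-earth example above can be sketched in a few lines of code. This is a toy illustration — the sensor samples, time step, and units are invented — but it shows the sequence the text describes: integrate the z gyro to get ψ, rotate the body-frame accelerations into the NED frame, then integrate twice.

```python
import math

# Invented sensor samples: (body-frame ax, ay in m/s^2, yaw rate wz in rad/s)
# taken at a fixed interval dt. Five seconds of straight acceleration north,
# then five seconds of yawing with no acceleration.
dt = 0.1
samples = [(1.0, 0.0, 0.0)] * 50 + [(0.0, 0.0, 0.05)] * 50

psi = 0.0               # azimuth angle, x axis initially aligned north
vel_n = vel_e = 0.0     # NED-frame velocity
pos_n = pos_e = 0.0     # NED-frame position

for ax, ay, wz in samples:
    psi += wz * dt                                   # integrate the z gyro: psi(t)
    a_n = ax * math.cos(psi) - ay * math.sin(psi)    # rotate body accelerations
    a_e = ax * math.sin(psi) + ay * math.cos(psi)    # into the NED frame
    vel_n += a_n * dt                                # first integration: velocity
    vel_e += a_e * dt
    pos_n += vel_n * dt                              # second integration: position
    pos_e += vel_e * dt

print(psi, pos_n, pos_e)
```

Real systems do the same bookkeeping, just with far better numerical integration and error modeling.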
Of course we navigate in three dimensions but we'll get to that in a minute.
When considering how an inertial navigation system works, it is necessary for an engineer to get into the trigonometry. For us pilots, however, we need only understand that accelerometers (discussed below) measure the forces and computers take care of the math. If we have two devices designed to measure acceleration, mounted at 90° angles to each other, we can track our movement in two dimensions.
Figure: Platform orientation, from Stoval, figure 3.
[Stoval, pg. 3.]
- Now, we return to our consideration of three-dimensional motion over a flat earth. The platform attitude can no longer be specified by just the azimuth angle. We will specify the orientation of the platform by azimuth, pitch, and roll, as shown in Figure 3. In this document, platform refers to the body axes of the system being aligned. These axes are the same as the xyz coordinate frame previously defined.
By adding a third accelerometer, mounted at 90° to the previous two, we have a way of tracking movement in three dimensions. But we, as pilots, don't operate in three flat dimensions in the truest sense of the term: our horizontal plane is aligned with the earth, which isn't flat.
Figure: Spherical earth, from Stoval, figure 6.
[Stoval, pg. 3.]
- We now come to our spherical earth model. We will assume, for the moment, that the earth is not rotating. We can see that our NED coordinate system is no longer appropriate for indicating position with respect to the earth’s surface. We now describe our platform’s position in terms of latitude, longitude, and altitude (Φ,λ,h) above the earth’s surface (Figure 6). We still use the NED frame, but now it represents a frame that is tangent to the surface of the earth at the platform’s present position. This frame is referred to as a locally level frame. Note that the NED frame now moves about the surface of the earth along with the platform. We also define a coordinate frame, which is fixed with its origin at the center of the earth, as the z axis through the North Pole, the x axis through the Greenwich Meridian, and the y axis to complete a right-handed coordinate frame. This is the earth-centered, earth-fixed (ECEF) frame. The transformation from the ECEF frame to the NED frame is defined by the latitude and longitude. The attitude of the platform is described with respect to the NED frame exactly as before.
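For those who want to see the math, the transformation Stoval describes — ECEF to the locally level NED frame, defined entirely by latitude and longitude — can be sketched as a rotation matrix. This is a minimal illustration assuming a spherical earth:

```python
import math

def ecef_to_ned_matrix(lat_deg, lon_deg):
    """Rotation matrix taking ECEF-frame vectors into the locally level
    NED frame at the given latitude and longitude (spherical earth)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    sl, cl = math.sin(lat), math.cos(lat)
    so, co = math.sin(lon), math.cos(lon)
    return [
        [-sl * co, -sl * so,  cl],   # north axis
        [     -so,       co, 0.0],   # east axis
        [-cl * co, -cl * so, -sl],   # down axis
    ]

# At the equator on the Greenwich meridian, the ECEF x axis points straight
# up, so the local "down" direction is the negative ECEF x axis:
m = ecef_to_ned_matrix(0.0, 0.0)
print(m[2])
```

The point for pilots: given latitude and longitude, "level" at your present position is completely determined, and the computers keep recomputing it as you move.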
Our inertial navigation must consider "level" to be parallel to the earth's surface or perpendicular to gravity's pull. The engineer calls this the "locally level frame" but for us pilots it is simply level flight. There is still one more problem to overcome: the earth is rotating. We need to compensate for that.
Figure: Rotating spherical earth, from Stoval, figure 8.
[Stoval, pg. 3.]
- The next step in our analysis is to allow our spherical earth to rotate. We define a new coordinate frame called the inertial frame, which is fixed at the center of the earth. Ignoring the earth’s orbital motion, we regard the orientation of this frame as fixed with respect to the distant stars. The inertial frame is defined to be coincident with the ECEF frame at zero time. Figure 8 shows the relationship between the inertial and ECEF frames. The ECEF frame rotates with respect to the inertial frame with an angular velocity (Ω) of approximately 15.04 degrees per hour.
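That 15.04 degrees per hour is what a stationary inertial system senses during alignment. At any latitude the locally level frame sees the earth rate split between its north and down axes — a quick sketch, with illustrative values only:

```python
import math

OMEGA_EARTH = 15.04   # earth rotation rate, degrees per hour

def earth_rate_ned(lat_deg):
    """Earth-rate components seen in the locally level NED frame."""
    lat = math.radians(lat_deg)
    north = OMEGA_EARTH * math.cos(lat)   # sensed by a north-axis gyro
    east = 0.0                            # no earth rate about the east axis
    down = OMEGA_EARTH * math.sin(lat)    # sensed by a down-axis gyro
    return north, east, down

# At 45 degrees latitude the rate splits evenly between north and down,
# about 10.63 deg/hr each; at the pole it is all about the down axis.
n, e, d = earth_rate_ned(45.0)
print(n, e, d)
```

This north component is what gyrocompassing exploits: the only horizontal direction with an earth-rate signal is north, which is why an INS can find true north while sitting still — and why alignment gets harder at high latitudes, where that component shrinks toward zero.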
Inertial Navigation System Mechanics
Figure: Accelerometer, from Stoval, figure 2.
Before considering gyroscopes and other items of inertial magic, let us consider the accelerometer.
[Stoval, pg. 5.] No matter how an accelerometer is constructed, we may think of it as shown in Figure 2. The accelerometer consists of a proof mass, m, suspended from a case by a pair of springs. The arrow indicates the input axis. An acceleration along this axis will cause the proof mass to be displaced from its equilibrium position. This displacement will be proportional to the acceleration. The amount of displacement from the equilibrium position is sensed by a pick-off and scaled to provide an indication of acceleration along this axis. The equilibrium position of the proof mass is calibrated for zero acceleration. An acceleration in the plus direction will cause the proof mass to move downward with respect to the case. This downward movement indicates positive acceleration. Now imagine that the accelerometer is sitting on a bench in a gravitational field. We see that the proof mass is again displaced downward with respect to the case, which indicates positive acceleration. However, the gravitational acceleration is downward. Therefore, the output of an accelerometer due to a gravitational field is the negative of the field acceleration. The output of an accelerometer is called the specific force and is given by
f = a - g
f = specific force
a = acceleration with respect to the inertial frame
g = gravitational acceleration.
This relation is the cause of much confusion. The easy way to remember this relation is to think of one of two cases. If the accelerometer is sitting on a bench, it is at rest so a is zero. The force on the accelerometer is the normal force of reaction of the bench on the case, or negative g. Or imagine dropping the accelerometer in a vacuum. In this case f reads zero and the actual acceleration is a = g. To navigate with respect to the inertial frame, we need a, which is why, in the navigation equations, we convert the output of the accelerometers from f to a by adding g.
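The two memory cases above can be written out in a couple of lines. The sign convention here — down positive, g = +9.81 m/s² — is an assumption for illustration:

```python
# Specific force is f = a - g, so the navigation computer recovers a = f + g.
# Down is positive here (NED convention); g = +9.81 m/s^2 is assumed.
G = 9.81

def accel_from_specific_force(f):
    """Recover inertial acceleration from a down-axis accelerometer output."""
    return f + G

# At rest on a bench the accelerometer reads the bench's reaction, f = -g:
print(accel_from_specific_force(-G))    # -> 0.0 (not accelerating)

# In free fall the accelerometer reads zero:
print(accel_from_specific_force(0.0))   # -> 9.81 (accelerating downward at g)
```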
Of course this is oversimplified; real accelerometers measure the deflection of the "proof mass" with something a bit more reliable than a spring. But the point is that we can measure acceleration along any axis.
[King] Consider an accelerometer as an instrument that measures acceleration along a single axis. Integrate the output once, and you have velocity. Integrate again, and you have position - or rather, change of position - along the accelerometer's axis. If you know the direction of travel, you can deduce current position. Inertial Navigation is simply a form of `dead reckoning'. You need to know the starting point - an inertial navigation device/system (I.N.) can't find its initial position on the earth (it can find latitude, with difficulty, but not longitude).
That isn't as complicated as it might sound. Acceleration is a unit of distance divided by time twice — feet/sec², for example. You can think of the process of integration as multiplying the acceleration by time. If you multiply feet/sec² by seconds, you get feet/sec, which is nothing more than velocity. Do it again and you get feet.
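You can watch the units cancel numerically. This toy loop — the acceleration, duration, and step size are invented — integrates a constant acceleration twice, just as the text describes:

```python
# Constant 2 ft/sec^2 acceleration for 10 seconds, integrated step by step.
# The numbers are invented; the point is how the units cancel.
accel = 2.0        # ft/sec^2
dt = 0.01          # seconds per step
velocity = 0.0     # ft/sec
position = 0.0     # ft

for _ in range(1000):            # 10 seconds of samples
    velocity += accel * dt       # ft/sec^2 * sec = ft/sec
    position += velocity * dt    # ft/sec * sec = ft

# Exact answers: v = a*t = 20 ft/sec, s = a*t^2/2 = 100 ft (this simple
# rectangle-rule sum lands slightly high, near 100.1 ft).
print(velocity, position)
```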
Figure: Gyroscope operation, from Wikimedia.
The word gyroscope comes from the Greek gyros, which means circle. In a conventional gyroscope, the gyro is a spinning wheel mounted on two bearings inside an inner ring, which itself is mounted inside another ring. When rotating, the spinning wheel maintains its position according to the conservation of angular momentum.
These rings are known as gimbals. Note that the outer gimbal can be moving in various directions and the inner spinning wheel maintains its orientation in space. That becomes very useful when the outer gimbal is mounted to a moving vehicle, such as an airplane. The airplane's orientation in space can change while the inner spinning wheel's orientation remains constant.
The axle of the spinning wheel determines its axis. The ends of the axle resist any external forces exerted by the outer gimbals; measuring these forces with accelerometers provides the basis of inertial navigation.
Figure: Gimballed inertial platform, from King, figure 1.
[King] Take three accelerometers, with their sensing axes orthogonal. Arrange them so that their axes are aligned north-south, east-west, and vertical. To maintain this orientation when the vehicle maneuvers, the accelerometers are suspended in a set of three gimbals that are gyro-stabilized to maintain the direction.
A basic inertial navigation system uses three gyroscopes to keep an inertial platform in a fixed orientation in space and three accelerometers to measure forces on the platform, from which acceleration in three axes is derived. Integrating the accelerations over time gives the system velocity in all three directions. Multiplying the velocities by time gives relative position. If the starting position was known, the relative position can be added to it to determine an ending position.
Errors: The Flat Earth Society
Figure: Schuler pendulum, from King, figure 3.
Of course all this theory isn't as simple as that. There are errors involved.
[King] The earth is not flat. As we move, close to the surface, we need to keep tilting the platform (with respect to inertial space) to keep the axes of the N and E accelerometers horizontal. To do this, we can use the gyro torquers, and feed them with a signal proportional to the N and E velocity.
Since we reference everything on earth to the surface of the earth, the inertial platform must be moved to agree with the "flat earth" concept. If we didn't do this, for example, an airplane flying from Hawaii to Switzerland would show itself inverted at the end of the journey.
[King] Assume one of the gyros has a 'drift' error of 0.01 deg/hour. This, via the servo, causes a tilt to build up, which oscillates at the Schuler frequency, again causing an oscillatory acceleration signal error, and hence an oscillatory velocity error.
Of course the given figure here is hypothetical; it will differ from gyro to gyro. The effect was first attributed to friction in the bearings, but even ring laser gyros are known to drift. The errors of the various gyros compound and can throw off the inertial platform. Total drift of 3 to 7 miles per hour is not unusual.
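To see why a tiny drift matters, here is a rough simulation of the error loop King describes: the drift tilts the platform, the tilt picks up a false gravity component, and the velocity feedback levels the platform again, closing the loop at the Schuler frequency. The model is deliberately minimal — constant drift, simple Euler integration, assumed round-number constants:

```python
import math

# Constants are assumed round numbers, not from any particular system.
G = 9.81                              # gravity, m/s^2
R = 6.371e6                           # earth radius, m
DRIFT = math.radians(0.01) / 3600.0   # 0.01 deg/hr gyro drift, in rad/s

dt = 1.0                              # time step, seconds
tilt = vel_err = pos_err = 0.0
for _ in range(3 * 3600):             # simulate three hours
    tilt += (DRIFT - vel_err / R) * dt    # drift tilts the platform; the
                                          # velocity feedback levels it
    vel_err += G * tilt * dt              # tilt picks up a gravity component
    pos_err += vel_err * dt               # ...which integrates into position error

# The velocity error oscillates with a period of 2*pi*sqrt(R/G) -- about
# 84.4 minutes -- around a mean of DRIFT * R, roughly 0.31 m/s (0.6 knots).
print(2 * math.pi * math.sqrt(R / G) / 60)   # Schuler period in minutes
```

This is where the classic rule of thumb comes from: each 0.01 deg/hr of horizontal gyro drift costs you on the order of 0.6 knots of velocity error, with the 84-minute Schuler oscillation superimposed.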
[King] This causes a different behaviour to that caused by drift in the `horizontal' gyros. The gimbals rotate slowly about the vertical axis, thus the `East' gyro begins to sense a small component of the earth's rotation. This in turn causes the `East' servo to tilt the platform towards the North, causing an erroneous `North' acceleration signal. The resulting North position error in this case is a Schuler oscillation, superimposed on a time-squared function. For short durations, this effect is less sensitive than that of `horizontal' gyro drift. It requires about 0.2 deg/hour of azimuth gyro drift to produce a position error of 1 nautical mile after one hour - that is, a factor of 20 less sensitive.
[King] Gimballed [inertial navigation systems] can be very reliable, accurate, and good value for money. However, the gimbal arrangement is mechanically very complex. It contains delicate sliprings; the motors dissipate power, thus the instruments see a varying thermal environment as the gimbals move about; mechanical resonances are unavoidable. They can also be expensive to maintain. So from the early 1970s, the [inertial navigation] industry started contemplating an alternative, simpler arrangement. Why not get rid of the gimbals altogether - just `strap down' the gyros and accelerometers onto the mounting frame?
Getting rid of the gimbals certainly reduces the number of moving parts and that increases reliability.
[King] In a gimballed system, gyros need to measure down to a few thousandths of a degree per hour (extremely difficult, but after 20 years' development, fully achievable). However, they had only to measure rotations up to a few tens of degrees per hour - a dynamic range of about 10⁵. In a strapdown system, the same drift accuracy is needed, but it is also necessary to measure rotations within the full maneuver envelope of the aircraft - up to several hundreds of degrees per second. The dynamic range is thus four orders of magnitude greater.
The problem is that the sensors used to detect forces on the gyros have to compensate for the movement of the vehicle itself. But this isn't a problem for modern computers, and so the strapdown inertial is pretty much the standard solution today.
Ring Laser Gyros (RLGs)
In the early days of inertial navigation systems, it was thought that if we could eliminate the spinning-wheel gyros, we could greatly increase the reliability of the systems. Therein began the quest for ring laser gyros.
[King] Interestingly enough, the `reliability' advantage of RLGs has turned out to be a fallacy. Good spinning-wheel gyros today have mean time between failures (MTBFs) - in an aircraft environment - of tens of thousands of hours, and virtually no life limiting wearout mechanisms. RLGs are not demonstrably better in either of these respects. In fact, it tends to be the reliability of the associated electronics that dominates an I.N. system's MTBF. A modern strapdown RLG I.N. has an MTBF of 5000 - 10000 hours.
Figure: Ring laser gyro schematic, from King, figure 8.
[King] The RLG body is a solid glass block, with three narrow tubes drilled in it. A mirror is placed at each corner, forming a triangular optical resonator path. The tubes are filled with a helium-neon mixture at low pressure. A high voltage (around 1kV) is applied between the cathode and the two anodes, causing a discharge (simply an expensive neon lamp). The discharge provides enough energy to cause regenerative lasing action in the gas, with light beams circulating around the triangular resonator path. In fact, there are two lasers within the same cavity - one with a clockwise (CW) beam, the other counterclockwise (CCW). When the gyro is at rest, the two beams have the same frequency (typically with a wavelength of 633 nanometres).
The laser light is simply created between the cathode and anodes using high voltage and the gas mixture. Two beams travel in opposite directions around the same cavity, using the mirrors to turn the corners. The light particles are called photons.
Figure: Ring laser gyro photograph, from King, figure 9.
[King] Now consider the block rotating in a CW direction. A photon in the CW beam, starting at the bottom left-hand mirror finds, after one traverse of the cavity, that the mirror has moved slightly further away. Thus it sees a slightly increased path length. Similarly, a photon in the CCW beam finds a shorter path length. The difference in path lengths causes a small difference in frequency. By making one of the mirrors partially transmitting, samples of both beams can be extracted and the frequency difference measured. This is precisely proportional to the applied rotation rate.
A computer simply measures the frequency difference between the two beams to come up with the rotation rate of the system.
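King's path-length argument is the Sagnac effect: the beat frequency between the two beams is Δf = 4AΩ/(λL), where A is the enclosed area, L the path perimeter, λ the wavelength, and Ω the rotation rate. A sketch with assumed dimensions, typical of a small avionics RLG rather than any particular unit:

```python
import math

# Assumed dimensions, typical of a small avionics RLG (not any specific unit).
side = 0.10            # m, length of each leg of the triangular path
wavelength = 633e-9    # m, helium-neon laser line

L = 3 * side                           # perimeter of the beam path
A = (math.sqrt(3) / 4) * side ** 2     # enclosed area (equilateral triangle)

def beat_frequency(rate_deg_per_sec):
    """Sagnac beat frequency: delta_f = 4 * A * omega / (lambda * L)."""
    omega = math.radians(rate_deg_per_sec)
    return 4 * A * omega / (wavelength * L)

# Even earth rotation alone (15.04 deg/hr) produces a measurable beat of a
# few hertz; aircraft maneuver rates produce far larger beats.
print(beat_frequency(15.04 / 3600.0))
```

Because frequency can be counted digitally with great precision, this scale factor is what makes the RLG such a good strapdown sensor.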
[King] A complication arises at very low rotation rates. The mirrors are not perfect and produce minuscule amounts of backscatter, which couples energy between the two beams. This coupling of energy between two very high-Q oscillators can cause the frequencies to lock together.
At very low rotation rates the beams can synchronize and lock together, dropping the measured frequency difference to zero. This can be countered by giving the beams different paths or by mounting the gyro on a piezo-electric dither motor that rapidly vibrates to decouple the light waves. (Dither means tremble or vibrate.) The dither motor is the most often used solution and comprises the ring laser gyro's only moving part.
[King] A new feature is that many [inertial navigation systems] today contain an embedded GPS (Global Positioning System) receiver module. GPS and I.N. are ideal synergistic partners, as their error dynamics are totally different and uncorrelated. The following are the main advantages:
- The integration with GPS solves the problem of `calibrating' the instrument errors in a strapdown system. In a gimballed system, the gimbals can be moved into different positions without removing it from the aircraft, thus allowing the earth's rotation and gravitational field to calibrate each of the gyros and accelerometers. This cannot be done with a strapdown system.
- Similarly, the GPS provides a means of `in-flight alignment', removing the need for the aircraft to be held stationary for up to 5 minutes while the I.N. `gyrocompasses', prior to flight.
- The I.N. provides a seamless fill-in for GPS `outages' resulting from jamming, obscuration caused by maneuvering, etc.
- The I.N. provides a means of smoothing the noisy velocity outputs from the GPS, and a continuous high-bandwidth measurement of position and velocity.
- In a tightly-integrated system, the I.N. provides a means for narrowing the bandwidth of the GPS tracking loops, providing greater immunity to jamming.
Honeywell Laseref VI
[Honeywell Laseref VI Product Description, §1.0] The Laseref VI Micro Inertial Reference System contains the following components: HG2100BB Micro IRU, WG2001 Mounting Tray, IM-950 Aircraft Personality Module.
The HG2100 Micro IRU is a self-contained Inertial Reference Unit that provides long range navigation using high accuracy inertial sensors. Industry standard ARINC-429 outputs are provided for Flight Management Systems, Primary Displays, Forward Looking IR Cameras, Head-Up Displays, Flight Control, antenna stabilization (Satcom, Weather Radar, Direct Broadcast Satellite), EGPWS, and other critical aircraft systems. Full inertial reference performance is provided for unaided RNP-10 and RNP-5 (time limited) without GPS inputs. When GPS inputs are applied, the IRU provides tightly coupled GPS/Inertial hybrid outputs, initializes automatically, and performs alignment-in-motion.
ARINC 429 Outputs:
[Honeywell Laseref VI Product Description, §2.0] The Inertial Reference (IR) component of the Micro IRS contains three force rebalance accelerometers and three laser gyros, which it uses to measure inertial motion. The IR component requires system initialization (entry of latitude and longitude). Initialization may come from another system such as a Flight Management System (FMS) or from position inputs provided by a GPS receiver. Once the IR component is properly aligned and initialized it transitions into its normal operating mode. It relies on inputs from an Air Data System (ADS) for wind, flight path and altitude. The inertial reference system outputs the parameters below.
- Longitudinal, Lateral, and Normal Accelerations
- Pitch, Roll, and Yaw Rates

Local Level Frame:
- Pitch and Roll Angles
- Pitch and Roll Attitude Rates
- Flight Path Angle and Flight Path Acceleration
- Inertial Vertical Speed and Inertial Vertical Acceleration
- Platform Heading
- Turn Rate
- Latitude and Longitude
- N-S Velocity, E-W Velocity, and Ground speed
- Inertial Altitude
- True and Magnetic Heading
- Track Angle True and Track Angle Magnetic
- Track Angle Rate
- Wind Speed and Wind Direction True
- Drift Angle
- Along Track and Cross Track Accelerations
- Along Heading and Cross Heading Accelerations
[Honeywell Laseref VI Product Description, §2.0] The GPS Hybrid function utilizes existing hardware components in the IRU to receive GPS data from one or two GPS Receiver systems. Data received is a one Hz nominal RS-422 time mark signal unique for each GPS receiver input and ARINC 429 GPS high-speed satellite measurement and autonomous data. The GPS Hybrid function blends received GPS autonomous Pseudo Range with Inertial and Air Data altitude data in a tightly coupled Kalman filter to achieve optimal position, velocity, and attitude performance. All satellites and sensors are individually calibrated in the Kalman filter. The resulting hybrid data is highly calibrated and provides exceptional navigation performance even if all satellites are lost. The GPS Hybrid function provides the following output parameters:
- Hybrid Latitude and Longitude
- Hybrid N-S Velocity, E-W Velocity, and Ground Speed
- Hybrid Altitude and Vertical Velocity
- Hybrid True Heading, Track Angle, and Flight Path Angle
- Hybrid Horizontal and Vertical Figure Of Merit and Integrity Data
HIGH Integrity Hybrid GPS (HIGH Step II)
[Honeywell Laseref VI Product Description, §2.0] HIGH Step II is an enhanced version of HIGH that further improves the capability of the GPS/Inertial technology. HIGH Step II meets the industry requirements (DO-229 appendix R) for GPS/inertial tightly coupled integrity calculations. It features a Honeywell algorithm called Solution Separation that uses optimal multiple 36-state Kalman filtering techniques to produce a RAIM-like function and also extends the integrity protection levels (i.e., integrity coasting) by taking advantage of the inertial integration, which extends the function to GPS-denied environments (e.g., terrain masking, solar storms, intentional jamming, GPS constellation variation, etc.). This makes RNP navigation, especially for low RNP, more robust against unexpected GPS-denied environments that would otherwise lead to missed approaches.
If GPS data is completely lost, the Kalman filter will maintain accuracy for an extended period of time. The table below shows the 95% coasting performance.
Summary of 95% Accuracy Hybrid Coasting Times

|Performance Level|RNP 0.1|RNP 0.3|RNP 1|
|---|---|---|---|
|95% Accuracy|8 minutes|20 minutes|2 hours|
[Honeywell Laseref VI Product Description, §2.0] The IRU provides three alignment modes consisting of:
- Stationary Alignment
- Align In Motion
- Auto Realign
Stationary Alignment and Align In Motion modes are performed in conjunction with the Attitude mode prior to entry into the Navigation mode so that valid attitude outputs are available immediately after power-up. The Auto Realign mode is performed in conjunction with the Navigation mode. The IRU continuously tests for the Align In Motion conditions, and if met, preempts the Stationary Alignment mode and switches to the Align In Motion mode. Following completion of either alignment mode, the IRU transitions to the Navigation mode. Once the Navigation mode is attained, the IRU remains in this mode indefinitely while valid power is applied to the device (or until the IRU is reset using either the IRU Off discretes or the IRS Reset Command). While motionless in the Navigation mode, the IRU automatically realigns itself using the Auto Realign function.
During Stationary Alignment and Post Flight Auto Realign, valid data from GPS may be used as an automatic source for position entry. Also, valid GPS data must be received in order for Align In Motion to operate. To be considered valid for use during Stationary Alignment, Align In Motion, and Post Flight Auto Realign, GPS data shall be ARINC-743A or ARINC-755 format.
You need to unlearn some of what you learned operating conventional inertial navigation systems when you upgrade to a hybrid system. The IRS becomes one of the most accurate navigation systems on the airplane and might be selected above DME/DME or VOR/DME systems should the GPS fail. What follows comes directly from a Honeywell newsletter, New FMS Sensor Logic.
As Performance-Based-Navigation (PBN) becomes more prevalent throughout worldwide airspace, Honeywell has modified the logic of the EPIC and FMZ 2000 FMS systems to ensure that the best sensor is being used for navigation. Sensor accuracy and integrity is generally measured by the Estimated Position of Uncertainty (EPU). An EPU is assigned for each sensor that is available to the FMS, and is calculated using various means.
Prior to FMS 7.1 (EPIC) and 6.1 (FMZ 2000), the FMS uses a linear hierarchy to determine which sensor should be used for navigation. If the EPU value of any sensor is deemed unacceptable by the FMS, then it will ‘fail down’ to the next available sensor. The fail down logic in older software versions uses the following hierarchy:
- IRS (if installed)
- Dead Reckoning
With software versions 7.1 (EPIC) and 6.1 (FMZ 2000), the FMS chooses the best available sensor based on the lowest EPU. “Best” sensor means that, if the current sensor in use is less than 5% worse than any other sensor, the FMS will continue to use it as long as it meets all validity checks. In other words, for the FMS to use another sensor, the new sensor has to be performing at least 5% better than the sensor currently in use. The exception is when DME/DME or VOR/DME is being considered. In this case, a radio source must have an EPU at least 40% better than a GPS or IRS EPU to be considered for navigation.
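The 5%/40% rule can be sketched as a simple selection function. The sensor names and EPU numbers below are illustrative only, not Honeywell's actual implementation:

```python
# A sketch of the sensor-selection rule described above, with assumed data.
RADIO_SENSORS = {"DME/DME", "VOR/DME"}

def select_sensor(current, candidates):
    """current: (name, epu); candidates: dict of sensor name -> EPU in nm."""
    name, epu = current
    best = min(candidates, key=candidates.get)   # lowest EPU wins, tentatively
    best_epu = candidates[best]
    if best == name:
        return name
    if best in RADIO_SENSORS:
        # a radio source must be at least 40% better to take over
        return best if best_epu <= 0.60 * epu else name
    # any other sensor must be at least 5% better to take over
    return best if best_epu <= 0.95 * epu else name

# GPS degrades slightly; IRS is only 3% better, so the FMS keeps GPS:
print(select_sensor(("GPS", 0.10), {"GPS": 0.10, "IRS": 0.097, "DME/DME": 0.20}))
```

The hysteresis in both thresholds keeps the FMS from bouncing between sensors whose accuracies are nearly identical.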
The incorporation of this new logic also means that crews are more likely to encounter IRS Navigation mode because the newer Hybrid Inertial Reference Units are constantly being updated by the GPS, and generally have a lower EPU than DME/DME. Additionally, the inclusion of SBAS will yield an even lower EPU for the IRS. Subsequently, if the GPS EPU rises to a value that is 5% worse than the IRS EPU, then IRS will be used by the FMS. For example, if the aircraft is using GPS with SBAS (GPS-D mode) with an EPU of 0.01, then the IRS should also have a similar EPU because its position is constantly being updated with the D-GPS position. If, however, the SBAS signal is lost, the GPS EPU would immediately increase to a higher value (e.g., 0.11). It is then possible that the system would revert to IRS for a short period because the IRS EPU would be better than the normal GPS EPU. As the IRS coasts, its EPU would then grow to match the GPS EPU; the FMS would then resume using normal GPS. If normal GPS is then lost, the FMS will continue in IRS Navigation mode until a radio source's EPU is at least 40% better than the IRS EPU.
Whenever the EPU value exceeds the RNP value for a given phase of flight, various CAS messages or FMS messages will be annunciated. Additional CAS messages may also be annunciated if the EPU reaches an unacceptable level to conduct an RNAV approach, regardless of phase of flight. For example, when operating in terminal airspace, the EPU would have to exceed 1.0 in order to receive an “UNABLE RNP” Message. Additionally, if the EPU reaches a value above 0.5, other CAS Messages are annunciated on certain EPIC platforms, alerting the crew that an RNAV approach may not be possible. It is important to note that automatic reversion to IRS due to GPS issues will not generate a CAS message until the IRS EPU reaches a level above the current or next leg RNP value. Flight crews should review their OEM guidance to determine applicable CAS messages and associated flight crew procedures.
Basic Principles of Inertial Navigation, Seminar on inertial navigation systems, Tampere University of Technology, undated.
Honeywell Direct-To Newsletter, New FMS Sensor Logic, December 2012.
Honeywell Laseref VI, Micro Inertial Reference System, Product Description, March 2012.
King, A. D., B.Sc, F.R.I.N., Inertial Navigation - Forty Years of Evolution, General Electric Company Review, Vol 13, No 3, 1998.
Stoval, Sherryl H., Basic Inertial Navigation, Naval Air Warfare Center Weapons Division, September 1997.
Wikimedia Commons, Public Domain Artwork