11: Locating Cable Faults

Written by Bruce Robertson, 1997. Reviewed Dec 2005. © Kingfisher International

Introduction

Locating fiber cable problems can be a real challenge for a technician! The appropriate repair solution will depend on the circumstances.

You would be very well advised to spend some time experimenting with fault finding techniques for your application. This will avoid having to experiment on a live system, which may cause further damage to both the system and your reputation.

Fault finding also presents its own practical difficulties, and no single technique suits every situation. The main fault location techniques, each covered below, are visible fault location, clip-on identifiers, OTDRs, and the Cold Clamp.

Visible fault location

This technique was pioneered using helium-neon lasers producing red light at 632.8 nm. These worked well, however the lasers were very bulky and often had a short life. They were gradually replaced by much more convenient solid state lasers, first at 670 nm, then 650 nm, and now commonly at 635 nm.

The gradually evolving wavelength of the solid state lasers matters because of the nature of the human eye, which responds much better to 635 nm light than to 670 nm light. The actual power level is limited by safety considerations to below +7 dBm, so the weaker response at 670 nm cannot be compensated by boosting the available power. At the same power level, 635 nm devices appear about 8 times (9 dB) brighter than the older 670 nm types.
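As a rough cross-check of that figure (an illustration, not from the original article), the sketch below scales equal optical powers by approximate CIE photopic luminous efficiency values; the V values are rounded textbook figures, and the result lands close to the 8 times (9 dB) quoted above.

    import math

    # Approximate CIE photopic luminous efficiency values (rounded assumptions)
    V = {635: 0.22, 670: 0.032}

    ratio = V[635] / V[670]              # relative perceived brightness, ~6.9x
    ratio_db = 10 * math.log10(ratio)    # ~8.4 dB

    print(f"635 nm appears {ratio:.1f}x ({ratio_db:.1f} dB) brighter than 670 nm")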

Companies still selling the older 670 and 650 nm lasers emphasize that their light is visible further along the link than the newer types, since it is attenuated less rapidly. However the devices are rarely, if ever, used in this manner, so the argument is spurious.

At Kingfisher we have tried experimenting with green lasers, but these didn’t prove very useful for various reasons to do with the fiber and cable.

It seems that 650 / 635 nm will remain the optimal wavelengths for this application, regardless of future advances in lasers.

It must be emphasized that the usefulness of visible fault locators depends on many conditions, so attempts to specify a particular distance range for them are fairly pointless. However, the maximum possible distance over which some light can be seen emerging from a cleaved end is about 10 km for 670 nm, and rather less for 635 nm. This is only useful if you are actually looking at ends!

A common use of visible fault locators is to locate a problem or break in a patch box or in cables within an exchange. The break shows as a bright red light shining through the side of the sheath. Of course, this depends on the light being able to get through the sheath. Many 3 mm patch lead cables readily allow the light through, however some colors (particularly purple and black) seem to be opaque to red light, and may not show anything.

It is better to verify expected performance with a visible fault locator before proceeding. It is even better to take this into account when specifying the cables in the first place.

A common use of visible fault locators in LAN environments is to check continuity and duplex connector polarity.

Another useful function is the ability to check whether light can get to a particular point on a link. To do this, put a sharp bend into the fiber, and visible light may leak out of the side of the sheath. It may be appropriate to shield as much ambient light as possible while doing this: maybe cover yourself with a ground sheet.

Visible fault locators are also extremely handy for finding problems with installed splitters and active devices. Without this technique, there is often very little alternative to dismantling a coupler assembly to find a suspected problem. Using visible fault location, it is often possible to find a fault with minimal disturbance.

Visible fault locators can also be used to rescue patch leads that have one faulty connector. The faulty connector will often glow brightly when light is injected into it.

Clip-on identifiers

A clip-on identifier is clamped onto a patch lead to determine whether a tone, live traffic, or nothing is present. This requires access to the fibers or patch cables, and a bit of slack to allow some bending. Readings may be adversely affected by colored plastic coatings absorbing the light.

Identifiers should be tested for the amount of extra loss they create, since this can drop out live systems. They tend not to give totally reliable results, and are often affected by stray light. For these reasons, they should only be used to verify link status before disconnection, and preferably not used indiscriminately to locate one of many possible active fibers. They are, however, a lot better than nothing!

OTDRs for fault finding

Optical time domain reflectometers send a powerful pulse up a link and analyse the reflections. The reflected signal is very weak, and may require extensive averaging to reduce detection noise. The user has to enter some information, such as the fiber's refractive index, from which the instrument mathematically deduces the power level at each point. From this it is possible to determine loss figures and the location of point losses.
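As a sketch of the underlying arithmetic (the function name and default index value are illustrative assumptions, not from the article), an OTDR converts each echo's round-trip time to a distance using the user-entered group refractive index:

    C = 299_792_458.0  # vacuum speed of light, m/s

    def event_distance_m(round_trip_time_s, group_index=1.468):
        # Halve the out-and-back path; slow the light by the group index,
        # which is the value the user must enter into the OTDR.
        return C * round_trip_time_s / (2 * group_index)

    print(event_distance_m(10e-6))  # a 10 microsecond echo is ~1,021 m away

A wrong index therefore scales every reported distance, which is one reason the physical fault location remains uncertain (see the limitations section below).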

In order to work over a range of applications, the pulse length can be varied from a few nsec to over 100 µsec. Short (low energy) pulses give the best distance resolution, but a noisier signal, and will only work at modest attenuation levels; they may require a lot of averaging to get a good signal, which can take some minutes. A long (high energy) pulse gives fast acquisition and a nice smooth trace (ideal for commissioning), but very poor distance resolution (bad for fault location).

OTDRs have some theoretical difficulty with point losses or reflections, in that the mathematics does not work well at such a point. The point loss or reflection is actually located by the intersection of the characteristics on each side of it, i.e. by further deduction. There is also a practical difficulty with point losses or reflections: the high gain detector amplifier may saturate or become slew-rate limited, creating a blind spot immediately after the event. This is called the dead zone, and it is a genuine limitation that depends on pulse length. The theoretically calculated dead zone is shown in the table below.

Pulse length    Dead zone
1 nsec          0.15 m (theoretical)
10 nsec         1.5 m (theoretical)
100 nsec        15 m
1 µsec          150 m
10 µsec         1.5 km
100 µsec        15 km

In practice, some older instruments have a minimum dead zone of 50 metres, while more modern units achieve 2 – 10 metres on the shortest pulse lengths. Some modern units also automatically increase the pulse length as they search further up the link, which is obviously highly desirable.

It should also be noted that dead zone is typically specified using a fairly low level reflective event, such as a mated PC polish connector. In multimode systems the connectors are highly reflective, so longer dead zones are observed than the instrument data sheet suggests. This is universal in the industry, and is not the fault of any one manufacturer.
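For reference, the table's round figures follow from the spatial length of the launched pulse, c·t/2. This minimal sketch (an illustration, not any manufacturer's formula) reproduces them, and also shows that using a realistic group index of ~1.47 instead of 1.0 would shrink each figure by about a third:

    C = 299_792_458.0  # vacuum speed of light, m/s

    def theoretical_dead_zone_m(pulse_s, group_index=1.0):
        # Spatial extent of the pulse; group_index=1.0 matches the
        # table's round figures, ~1.47 would model a real fiber.
        return C * pulse_s / (2 * group_index)

    for pulse in (1e-9, 10e-9, 100e-9, 1e-6, 10e-6, 100e-6):
        print(f"{pulse:.0e} s  ->  {theoretical_dead_zone_m(pulse):,.2f} m")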

Originally, OTDRs were used for long range applications over many km on telecom style links. Their effectiveness on multimode systems under a km in length is questionable, since dead zone effects often make it impossible to differentiate one loss point (e.g. a connector) from another, so little fault location can be done in this type of situation. This problem is often not understood by system designers, who insist on OTDR certification of a 100 metre run. The problem ends up like this: you need the highest performance instrument in exactly the situation where it is of the least possible value.

Another example of this problem arises with modern PON applications. An "FTTX" OTDR may have a very short dead zone specification, but seeing through the loss of a 32 way splitter requires a pulse length of 1 – 10 µsec, at which the actual dead zone is 150 – 1,500 metres. This is not very useful on a short distance PON.
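The arithmetic behind that pulse length requirement is straightforward; the excess loss allowance below is an assumed round number for illustration. The OTDR's backscatter signal passes through the splitter twice, so the instrument must see roughly double the one-way loss:

    import math

    def splitter_loss_db(ways, excess_db=1.0):
        # Ideal splitting loss plus an assumed excess loss allowance.
        return 10 * math.log10(ways) + excess_db

    one_way = splitter_loss_db(32)  # ~16 dB
    print(f"one-way {one_way:.1f} dB, seen twice by the OTDR: {2 * one_way:.1f} dB")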

The mathematical deduction process can also lead to some peculiar effects: some splices and connectors appear to have optical gain. This happens when the joined sections have slightly different characteristics, and the second section has a higher level of intrinsic backscatter than the first. If the same joint is measured from the opposite direction, its loss will appear abnormally high. This anomaly is resolved by measuring in both directions and averaging the two results.
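A minimal sketch of that bidirectional averaging (the figures are illustrative, not from the article): an apparent 'gainer' in one direction and an exaggerated loss in the other average out to the true splice loss.

    def true_joint_loss_db(loss_a_to_b_db, loss_b_to_a_db):
        # Averaging the two directions cancels the backscatter-mismatch error.
        return (loss_a_to_b_db + loss_b_to_a_db) / 2

    print(true_joint_loss_db(-0.15, 0.35))  # a 0.10 dB splice, despite the 'gain'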

From all this, it should be apparent that for fault finding, the user must be careful to optimize both distance and amplitude resolution for a particular situation, and that the job will be slower than certification.

The noise reduction achievable by signal averaging grows only with the square root of the sample time, so each time the averaging time is extended by a factor of 4, a 3 dB increase in range is obtained. This creates a practical limit: extending a 10 minute average (fairly boring) to 1 hour (really boring) yields only a 4 dB increase in range, whereas increasing from 1 second to 10 minutes yields a 14 dB improvement!
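Those figures follow directly from the square root relationship: the range gain in dB is 10·log10 of the square root of the time ratio, i.e. 5·log10(t2/t1). A quick check of the article's numbers:

    import math

    def range_gain_db(t_old_s, t_new_s):
        # Noise falls as sqrt(time), so range grows by 5*log10 of the ratio.
        return 5 * math.log10(t_new_s / t_old_s)

    print(range_gain_db(1, 600))     # 1 s -> 10 min: ~13.9 dB
    print(range_gain_db(600, 3600))  # 10 min -> 1 hour: only ~3.9 dB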

Limitations of using an OTDR by itself

In fault finding applications, OTDRs appear to measure exact distances, but the actual physical location of a fault is uncertain. Even under ideal conditions the distance uncertainty is about ±1%, i.e. a 20 metre window per km. Typical causes of error include uncertainty in the fiber's group refractive index, and the fact that the fiber is somewhat longer than the cable that carries it, due to slack and the helical lay of fibers within the cable.

In practice, these uncertainties do matter. Where the cause of a fault is hidden (e.g. ground movement, tree roots, rocks, rodents etc.), locating the loss point using an OTDR alone can take days of work, and creates a network hazard while 100 metres or more of cable is unearthed. The liquid nitrogen marker system described below enables the job to be completed in less than half a day, with minimal disturbance.
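To put the ±1% figure in concrete terms (a simple arithmetic sketch, using the distance from the case history below):

    def uncertainty_window_m(otdr_distance_km, pct=1.0):
        # Width of the ground search window for a +/- pct% distance error.
        return 2 * otdr_distance_km * 1000 * pct / 100

    print(uncertainty_window_m(4.28))  # ~86 m of trench at a 4.28 km fault

That window is of the same order as the 50 metre surprise in the case history that follows.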

Cold Clamp fault location

The Cold Clamp is a unique device developed by Kingfisher which overcomes some of the fundamental limitations of OTDRs.

The Cold Clamp works on jelly filled cables as typically used in long distance links, by acting as both a local physical and optical reference point.

A Cold Clamp is attached to the cable close to the estimated fault location, but far enough away to avoid dead zone problems. Liquid nitrogen is poured into the Cold Clamp, creating a temporary optical loss point of approximately 0.2 – 1 dB which shows up on the OTDR as a localized reference marker. Its distance from the fault is measured with the OTDR cursors, and the corresponding physical distance to the fault is then measured out on the ground.
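A sketch of the resulting arithmetic (the fiber overlength factor is a hypothetical, cable-specific value, not something the article specifies): the cursor separation is a fiber length, so pacing it out on the ground strictly requires allowing for the fiber being slightly longer than the cable sheath.

    def ground_distance_m(clamp_event_km, fault_event_km, fiber_overlength=1.01):
        # Convert the OTDR cursor separation (fiber length) to an
        # approximate cable/ground length; 1.01 is an assumed overlength.
        fiber_m = (fault_event_km - clamp_event_km) * 1000
        return fiber_m / fiber_overlength

    print(ground_distance_m(4.185, 4.2779))  # ~92 m to measure out from the clamp

Because the clamp is applied close to the fault, this residual correction is small, which is precisely why the technique is so accurate.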

OSP crews who have used the system find uses for it in all manner of situations where they want to know a position accurately, for example marking known danger points on the route during installation, such as rivers, roads and other cables.

Fig 11.1

Fig 11.1: A typical requirement for the use of a Cold Clamp.

Case history

A link was partially broken. An OTDR trace showed a break at 4.2779 km. The route map showed this as close to a river crossing. During a mobile phone conversation, the site crew remembered that there had been problems at the river crossing before, so they were fairly sure the problem was at the river crossing. However, the engineer in charge decided to check with a Cold Clamp.

The line was excavated and a Cold Clamp applied at a convenient point about 40 metres from the river. A trace as per Fig 11.2 was obtained, showing in general terms the break and the Cold Clamp loss point. Zooming in gave the trace in Fig 11.3, which clearly showed the loss induced by the Cold Clamp at 4.185 km and the break at 4.2779 km. Moving the cursors to the start of each event showed a separation of 92.8 metres.

This was surprising, since it was in fact 50 metres away from the expected fault site at the river crossing. There was the inevitable discussion between the crew, who thought they knew from past experience where the fault would be found, and the measurement crew, who disagreed. In the end the measurement crew prevailed: the distance was measured out on the ground, and excavation revealed a fracture "within a shovel width" of the predicted location. It turned out that the construction crew had bogged a D9 dozer at the exact point of the fault.

Use of the Cold Clamp in this instance saved hours of work trying to find a fault in the wrong place, with all of the extra network hazard that this would have entailed.

Fig 11.2

Fig 11.2: Trace of the fault & Cold Clamp loss.

Fig 11.3

Fig 11.3: Detail showing the relationship between the initial break and the temporarily applied Cold Clamp loss point.

Particular points about this incident

This was an experienced repair crew, with accurate maps, route data and other aids, and prior knowledge of the route: practically the 'ideal' situation. Despite all this, the fault was in a different place from that expected. The fault would of course have been located and fixed in time, but use of the Cold Clamp markedly improved the on-site process, reduced costs and improved service provision.