One of the key arguments for replacing bomb-sniffing dogs with electronic noses is that the dogs are fallible, with the potential to be distracted by the presence of sausages, whereas electronic noses are not. Yet defenders of the dogs argue that it is not so much that the dogs are making mistakes as that they are being poorly trained by their handlers. Canines are currently judged during training and testing by how well they find intentionally hidden explosives. This method makes intuitive sense: handlers cannot see the odours themselves, and certainly cannot smell them, so the handler's intention serves as a surrogate for whether an odour is actually present. However, explosive odours can easily be presented unintentionally, and this damages dog training. Now a team has unveiled a vapour-analysis device capable of real-time detection that allows handlers to visualise the odours they are presenting to their dogs, and thereby vastly improve training.
The new technology has a detection library of nine explosives and explosive-related materials, including some of the big baddies like nitroglycerin, triacetone triperoxide (used in the Brussels blasts) and cyclohexanone. It has detection limits in the parts-per-trillion to parts-per-quadrillion range and can reveal vapour-plume dynamics. The team deployed the device while expert trainers were training and testing their dogs, and it revealed that the handlers were making mistakes. In one test, for example, handlers believed they were presenting their dogs with 28 envelopes tainted with traces of the potent explosive RDX (trinitroperhydrotriazine) and 68 untainted controls. In fact, the researchers found that only 27 of the envelopes were tainted, and that six of the controls carried enough explosive residue to be detected by the dogs. This meant that on six occasions, dogs that (correctly) identified the presence of the explosive on control envelopes were treated as if they had made an error when they truly had not. You can read more in the article I wrote on this for The Economist here.