Food Testing: Total Dietary Fibre – as good as it gets?

Analytical protocols for the determination of total dietary fibre (TDF) in food are very easy to discuss theoretically, but rather more challenging to perform practically. It is not unusual for the procedure to become one of the most frustrating routine tests in the food laboratory, and the result can easily be misunderstood by the food business client. Often that is because of the expectations placed upon both the test and the obtained results. In respect of fibre testing, nothing is simple.

The first thing to consider is the question “what is total dietary fibre?” The answer is surprisingly easy. There is a relatively recent Codex definition describing exactly what should be considered as dietary fibre [1]. This has been widely adopted. There are also standard methods of analysis for the determination of dietary fibre in food published by the AOAC. So there should be no problem.

Of course, there are always problems. The first is that the two standard, if not classic, methods for determining TDF (AOAC 985.29 and 991.43) do not detect some of what is defined as TDF by Codex. This is for the very good reasons that some of these components were once not considered true dietary fibre, and that they are relatively insignificant in many food products. It means that for a sizeable majority of food samples the classic AOAC methods are still perfectly fit for purpose, which certainly came as something of a relief to the testing community.

Well, perhaps “relief” is rather a strong word to use. Although 985.29 and 991.43 are the classic methods, they are challenging to perform, and come laden with an appreciable uncertainty of measurement. Both methods are based on the assumption that if you remove everything that isn’t dietary fibre from a sample of food, then anything that you still have at the end can only be dietary fibre. The following basic steps are required.

  • Chemically remove sugar and fat from a food material
  • Use enzymes to digest starches and proteins under controlled conditions of temperature and pH
  • Precipitate out anything that has not been digested. Filter and collect the residue. This should only contain dietary fibre, residual protein and inorganic salts.
  • Dry and weigh the residue, and then analytically determine the protein content and the ash content.
  • Subtract the protein and ash from the total residue, and whatever remains is TDF.

Now, as I am sure can be appreciated, these 5 instructions turn into a very complicated procedure when enacted in the laboratory under the necessarily controlled conditions. It becomes even more complicated when a large number of tests have to be performed simultaneously, as would be the case in a high-throughput contract testing facility.
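
The arithmetic at the end of that procedure is, at least, mercifully simple. The sketch below shows the final subtraction in Python; the masses and variable names are purely illustrative, and a real implementation of 985.29 or 991.43 would also deal with reagent blanks and duplicate test portions.

```python
def tdf_g_per_100g(sample_mass_g, residue_mass_g, protein_in_residue_g, ash_in_residue_g):
    """Gravimetric total dietary fibre, expressed as g/100 g of sample.

    The dried, filtered residue is assumed to contain only fibre,
    residual protein and inorganic salts, so the protein and ash
    determined on that residue are subtracted and whatever remains
    is taken to be dietary fibre.
    """
    fibre_g = residue_mass_g - protein_in_residue_g - ash_in_residue_g
    return 100.0 * fibre_g / sample_mass_g

# Purely illustrative figures: a 1.0000 g test portion leaving a 0.0450 g
# residue that contains 0.0060 g residual protein and 0.0085 g ash.
print(round(tdf_g_per_100g(1.0000, 0.0450, 0.0060, 0.0085), 2))  # 3.05 g/100 g
```

The simplicity of that calculation is, of course, exactly why everything hinges on the quality of the digestion, filtration, protein and ash determinations that feed it.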

However, let us return, for a moment, to the idea that these classic methods do not detect some of what is defined as TDF. In order to achieve complete testing for all fibre molecules it is necessary to use an alternative approach, e.g. AOAC 2009.01. This method requires the analyst to insert some HPLC analysis into the above 5-point procedure. This further complicates and elaborates the situation. Nonetheless, it must be recognised that such a method may give a more complete and hence accurate measure of the total dietary fibre content of a food. In fact, there are occasions when these recent Codex-compliant methods are absolutely necessary. If a food product has been fortified with small-molecule soluble fibres then it is essential that AOAC 2009.01 (or an equivalent method) is applied. In order to do that, it is vital that the food business discusses this requirement with its servicing laboratory before samples are submitted. This will allow the laboratory to appreciate the full scope of the analytical demands, and then to advise and act accordingly.

One might then ask why any of the classic approaches to TDF analysis are still in use. Why would anyone wish to test using a method that does not detect components within the TDF definition? The answer is of course very simple and revolves around money. The classic methods can be performed very effectively and very efficiently, and in large volumes. It is not easy, but it can be done. The more recent Codex-compliant methods take appreciably more time and effort, and present appreciably greater technical challenges and capital outlay. The price charged by a laboratory for one of these tests will probably be something of the order of five times that charged for the classic analysis. The bottom line is that the vast majority of foods tested for TDF do not really need a Codex-compliant method, and there is little desire to spend food testing budgets unnecessarily.

Nevertheless, surely it is preferable to report better analytical data? Well, indeed it is; but now we must consider another element of TDF testing that has already been mentioned – the uncertainty of measurement (UOM). Evaluation of UOM can be decidedly tricky. Different people may have different views on the best way to do it, and there will be a number of alternative approaches used by a range of laboratories. There is not necessarily “one right way” to do it, although there will be many wrong ways. However, I am certainly quite comfortable with the idea that the best estimate for UOM of TDF testing on a range of food samples in any laboratory is likely to be between 20% and 30% regardless of the method applied. I am also quite comfortable with the idea that this uncertainty will increase at low levels – e.g. anything below 3 g/100 g TDF.

What does this mean to the food business operator? The most obvious scenario is as follows. Two identical samples are submitted to a laboratory for TDF testing. The obtained results for TDF are 2 g/100 g for one sample and 4 g/100 g for the other. Upon seeing this the laboratory manager will probably congratulate the lab analysts on a job well done. However, the client may be concerned that one result is 100% higher than the other and be rather less impressed. Unfortunately, in the world of food testing that may just be about as good as it gets for many samples. It also demonstrates why the classic methods are perfectly fit for purpose. For most individual samples, it is unlikely that any individual test using a Codex-compliant method would give significantly different results.
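
To put some hedged numbers on that scenario, the sketch below applies a crude compatibility check: two results are treated as consistent if their difference is no larger than their combined expanded uncertainties. The 30% and 50% figures are assumptions for illustration only – the former taken from the range quoted above, the latter reflecting the way uncertainty grows below 3 g/100 g – and this is not a formal conformity assessment.

```python
from math import sqrt

def compatible(x1, x2, rel_u1, rel_u2):
    """Crude check of whether two results agree within their uncertainties.

    rel_u1 and rel_u2 are relative expanded uncertainties; the results are
    treated as compatible if their difference does not exceed the combined
    expanded uncertainty.
    """
    u1, u2 = x1 * rel_u1, x2 * rel_u2
    return abs(x1 - x2) <= sqrt(u1 ** 2 + u2 ** 2)

# The 2 g/100 g vs 4 g/100 g pair from the scenario above:
print(compatible(2.0, 4.0, 0.30, 0.30))  # False at a nominal 30 % UOM
print(compatible(2.0, 4.0, 0.50, 0.50))  # True once UOM grows towards 50 %
                                         # at these low fibre levels
```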

Having said all of that, it is absolutely necessary to state that testing for TDF is definitely not some sort of chemistry-based random number generator. There are a series of key elements to getting the procedures to work properly. If they are implemented and followed correctly then analytical performance is significantly improved. Even then, certain matrices, such as meat or cheese, can be exhaustingly difficult to test effectively. However, if performed well then analysis can be repeatable and reproducible, and even professionally satisfying to those involved. If analysis is performed without due attention to the key elements then it can be a total nightmare.

The availability of specialised equipment for TDF testing is generally restricted to the enzyme digestion and filtration stages. The options range from automated through to relatively manual approaches. There are advantages and disadvantages associated with all alternatives, and many of these pros and cons will have as much to do with laboratory capacity and sample throughput as with any technical benefit. Analysis for the determination of total dietary fibre is a very “hands-on” procedure, and in such cases the simplest approach may often turn out to be optimal.

References

  1. Jones, J. M. (2014). CODEX-aligned dietary fiber definitions help to bridge the “fiber gap.” Nutrition Journal, 13, 34. http://doi.org/10.1186/1475-2891-13-34

Food Testing – Sodium: small but mighty

In terms of a standard food label, there isn’t much on there as small as sodium. Proteins, starches and fibres are Mummy and Daddy Molecules, fats are pretty grand, and even sugars have something to throw around. However, the element sodium (usually the only element to be found on a label) is a diminutive little thing – particularly as it will almost certainly be in the form of the sodium ion, which, for those with a ruler, has an ionic radius of about 0.1 nm (or 0.1 × 10⁻⁹ m).

In some foods you will find sodium up to percentage levels, and although it is essential for life, in humans it is also strongly associated with hypertension and heart disease when consumed above recommended levels. It might be small but it has a big impact! To allow consumers to make appropriate choices it has to be declared on a food nutritional label, and so we need to analyse food for its sodium content.

The most common way that sodium is introduced into food is by the use of salt. Salt has been used as a food preservative for many thousands of years, and as an enhancer of flavour for almost as long. Indeed, the chemical symbol for sodium, Na, is ultimately derived from the Natron Valley in northern Egypt, which was a major source of natron, a naturally occurring sodium salt, for the Ancient Egyptians. Throughout recorded history salt has been significant, both culturally and as a traded commodity. It appears in religious texts, in Chinese documents dating back over 4,500 years, and is the root of the English word “salary”. Sodium, in the form of salt, has affected the course of human civilisation!

However, although salt, or sodium chloride, is usually the major source of sodium in the diet, it is analysis of sodium itself that is required for food labelling. There are a number of reasons for this. Firstly, in a complex food product it is not possible to analyse sodium chloride per se – one can test for either the sodium bit or the chloride bit, but not the two together. Secondly, there are non-salt sources of dietary sodium. Sodium might be naturally occurring within a food. It might also be within ingredients such as sodium bicarbonate, sodium citrate, sodium tartrate or monosodium glutamate (MSG). So, sodium doesn’t necessarily come from salt: and we must remember that it is the sodium bit that gives rise to high blood pressure, not the chloride. Therefore, we must measure the total sodium content separately from everything else.

Despite our need to analyse solely for sodium, I do have to point out that when it comes to the value presented on a nutritional label, we do convert all the sodium that we find into an equivalent sodium chloride (salt) value. This is an attempt to demystify the food label. If people think of sodium then it might be as a tiny slice of metal burning up on a bowl of water as part of a chemistry lesson – a bit exotic. If people think of salt then it is something that they sprinkle on their chips – reassuringly familiar. Therefore, we declare all the sodium as if it were friendly salt rather than the hyper-reactive metal.
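
For completeness, the conversion itself is nothing more than multiplying the measured sodium by the conventional labelling factor of 2.5, which approximates the molar-mass ratio of sodium chloride to sodium. A minimal sketch:

```python
def salt_equivalent_g_per_100g(sodium_g_per_100g, factor=2.5):
    """Convert measured sodium into the 'salt' value declared on the label.

    The factor of 2.5 approximates the molar-mass ratio NaCl/Na
    (58.44 / 22.99 ≈ 2.54), rounded for labelling purposes.
    """
    return sodium_g_per_100g * factor

print(salt_equivalent_g_per_100g(0.4))  # 0.4 g/100 g sodium -> 1.0 g/100 g salt
```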

There are very many ways of measuring the sodium content of a food sample. Ion chromatography is an option, although not one commonly found within the UK testing market. Ion selective potentiometry could be used, but there are inevitable matrix issues that preclude its use as a generic method. Almost always, food testing laboratories rely on spectrometric techniques: flame photometry, atomic absorption spectroscopy, or optical emission spectroscopy.

The technique requiring the least technically advanced equipment is undoubtedly flame photometry (FP). A flame photometer is not really much more than a very clever and high-tech camping stove. If a very fine spray of a solution containing sodium is introduced to a finely controlled gas flame (methane, propane or butane are all suitable fuels) then the excited sodium atoms will give off a bright orange light of a specific wavelength. If the flow of the liquid is constant, then it is possible to construct a calibration curve of intensity of light given off (emitted) against sodium concentration.
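
The calibration arithmetic behind that curve is the same whichever instrument is chosen: measure a set of standards, fit a line, and read the unknown back off the fit. The sketch below uses numpy for a simple linear fit; the concentrations and intensities are invented for illustration, and a working flame-photometry calibration would typically cover a narrower range and be re-checked regularly for drift.

```python
import numpy as np

# Illustrative calibration standards: sodium concentration (mg/L)
# against relative emission intensity (arbitrary instrument units).
conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
intensity = np.array([0.5, 20.3, 41.1, 60.8, 80.2, 99.7])

# Linear least-squares fit: intensity = slope * conc + intercept
slope, intercept = np.polyfit(conc, intensity, 1)

def sodium_mg_per_l(sample_intensity):
    """Read a sample emission reading back off the calibration line."""
    return (sample_intensity - intercept) / slope

print(round(sodium_mg_per_l(52.4), 2))  # roughly 5.2 mg/L in the test solution
```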

No-one can deny that flame photometry is an effective approach. However, its simplicity (and occasional apparent malevolence) tends to require a genuine level of technical competence by the analysts using the equipment. The calibration range is usually relatively small, and when in use the instruments tend to require regular calibration checks to account for signal drift. Blockages in inlet tubing and aspirators are not uncommon, and the technique can be less reliable for some sample matrices. Nonetheless, when used correctly flame photometry will provide effective, rugged and fit-for-purpose data for the sodium content of foods for labelling purposes.

Atomic absorption spectroscopy (AAS) is definitely a step up the technological ladder. In some ways it retains some features of flame photometry inasmuch as a solution containing sodium is aspirated into a flame. However, this is an air-acetylene flame, which would not be appreciated on many camp sites. In addition, a sodium lamp is used to direct a beam of light, of a very specific wavelength, across the flame. It is the amount of that light which is absorbed by the sodium atoms in the flame, rather than that which is emitted, that is indicative of the sodium concentration of the aspirated solution.

Once set up, the determination of sodium in food by AAS is an excellent approach to the question at hand. However, anything involving acetylene requires careful thought and consideration, and again, a genuine level of technical competence can be most beneficial. Ionisation of sodium can be an issue, and the addition of a suitable suppressant, such as caesium chloride, can be recommended. Having said that, with a very large calibration range, stability of signal and the high-energy flame helping restrict interferences, AAS can be an exceedingly rugged, effective and sensitive technique for this purpose.

The third and most expensive technique that is routinely found in food testing laboratories is inductively coupled plasma optical emission spectrometry (ICP-OES). This is another step up in technology, although the principles will still appear familiar to the flame photometrist. Sample solutions are aspirated into a plasma (rather than a flame) and the emission of light of a wavelength specific to sodium can be measured. ICP-OES has the largest laboratory footprint of the three techniques, also requiring high vacuum and a supply of gas, usually argon or nitrogen. Ionisation issues can be dealt with similarly to AAS, and the very high energy of the plasma helps minimise interferences and give quite sensitive responses with a very wide dynamic range for calibration purposes.

There are undoubted advantages and disadvantages associated with each of these analytical approaches. A laboratory’s method of choice may well be determined by capacity, facilities and capital expenditure requirements rather than any technical demand. There have been suggestions that for some food matrices there may be differences in the results obtained by differing methods, but generally speaking sodium analysis should give a reliable result regardless of the equipment used.

Food Testing: Sulphur dioxide – saint and sinner

It was very interesting to see sulphur dioxide appear in the news recently. No less a body than the LGC described the accurate detection of SO2 at low levels in a difficult matrix as “the most challenging investigation” within its Government Chemist Annual Review for 2016. The work was carried out as part of the referee function of the LGC as it examined foods containing components such as garlic, known to present positive interferences to standard test methods.

When used as a food preservative, sulphur dioxide is added in the form of a sulphite or metabisulphite and there is no doubt that testing for sulphite residues can be markedly affected by interfering species. Since the official acceptance of the Monier-Williams procedure back in the 1920s, the story of sulphur dioxide testing is one littered with attempts to deal with this very problem. But perhaps, as someone might once have said, it would be best to start at the very beginning.

Sulphur dioxide itself is an unpleasantly toxic gas. It is produced within nature by volcanoes and forest fires, and also industrially by the burning of fossil fuels. It is a significant contributor to acid rain. All of this is probably very bad. However, it is a wonderfully effective anti-microbial, particularly against yeast and moulds, and is used extensively both as a food preservative and also during the production of wine and beer. All of this might be considered very good. As with so many chemicals, it is the dose that makes the poison – and I must give credit to Paracelsus for that one.

The other problem with sulphur dioxide is that it can provoke allergenic-type reactions in sensitive individuals, particularly those with asthmatic tendencies. This reaction can be quite severe even when the dose is quite low. Therefore, although one would struggle to categorise SO2 as an allergen in the strict sense of the term, it is classified within EU food labelling law as a declarable allergen. Along with gluten, it is also exceptional in having a lower limit below which it can be considered as absent; in the case of sulphur dioxide that limit is 10 mg/kg.

This takes us to the hub of the problems associated with testing for sulphur dioxide. Firstly, there are legislative limits for its use within food as a useful food additive. These can range from a maximum of 2000 mg/kg in dried apricots, down to a maximum of 10 mg/kg in table grapes. Secondly, if there is any present it must be declared as an allergen, but only when the level is greater than 10 mg/kg. This gives a significant analytical requirement of accuracy and precision at higher levels, but an extreme analytical requirement at and around the 10 mg/kg level.

At this point it is only fair to recognise that much of the testing for sulphur dioxide in food is relatively routine, and certainly fit for purpose. There are many published procedures that can be used on many products without any apprehension. However, when presented with a new or difficult matrix then many standard approaches may be compromised, and the idea of a one-size-fits-all analytical approach may be flawed.

So how does the diligent and conscientious analytical chemist approach this challenge? The vast majority of analytical methods are, at least in part, based upon the afore-mentioned Monier-Williams method. This procedure usually requires a food sample to be gently boiled in acidic solution under a condenser and with a low flow of nitrogen gas as a carrier. This should allow the analyst to distil off the sulphur dioxide gas whilst retaining other volatiles. The sulphur dioxide is usually collected in a receiver solution of dilute hydrogen peroxide, thus forming sulphuric acid which can be easily titrated using standardised sodium hydroxide solution.
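
The back-calculation from that titration is simple stoichiometry: each mole of SO2 carried into the peroxide trap becomes one mole of sulphuric acid, which consumes two moles of sodium hydroxide. The sketch below is a hedged illustration of that arithmetic rather than the exact wording of any official method, and the masses and volumes are invented.

```python
MOLAR_MASS_SO2 = 64.07  # g/mol

def so2_mg_per_kg(titre_ml, naoh_molarity, sample_mass_g):
    """Sulphur dioxide content from a Monier-Williams style distillation.

    SO2 + H2O2 -> H2SO4, then H2SO4 + 2 NaOH -> Na2SO4 + 2 H2O,
    so moles of SO2 = moles of NaOH used / 2.
    """
    mol_naoh = (titre_ml / 1000.0) * naoh_molarity
    mg_so2 = (mol_naoh / 2.0) * MOLAR_MASS_SO2 * 1000.0
    return mg_so2 / (sample_mass_g / 1000.0)

# Illustrative figures: a 50 g test portion needing 3.1 mL of 0.01 M NaOH.
result = so2_mg_per_kg(3.1, 0.01, 50.0)
print(round(result, 1))  # ~19.9 mg/kg
print(result > 10)       # above the 10 mg/kg allergen-labelling threshold
```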

The obvious problem is the “whilst retaining other volatiles” part. If any other acidic volatiles are not retained then false positives will result. There are many strategies that can be used to sidestep this issue. The final titrated solution can be treated with barium chloride, and the resulting barium sulphate determined gravimetrically – but this takes some time, and limits of detection will be an issue. The receiver solution can be replaced with standardised iodine, and the reducing power of sulphur dioxide can be measured by titration – but just as acidic volatiles can be a problem, so can volatiles with redox potential.

The selection of acidification reagent itself may come into play: hydrochloric acid releases sulphur dioxide very well, but if the distillation is too vigorous it may be encouraged into the gas phase itself and become an interferent; alternatives such as phosphoric acid may not enter the gas phase, but could also struggle to release the SO2, particularly in high-sugar products. In addition, the distillation solution can be diluted with methanol in order to reduce the boiling point and so aid in the retention of the volatiles – but some of those volatiles, particularly those associated with onions and garlic, seem just as flighty as SO2 itself. Consequently, it is clear that almost every part of the test procedure can be adjusted and optimised. This can even include sample handling; vigorous mechanical homogenisation of sulphured apricots can lead to low recoveries – the inescapable laws of thermodynamics releasing SO2 within the blender.

In light of all this, perhaps it is not surprising that testing for sulphur dioxide residues can be such a challenge. It is certainly interesting that the LGC had to turn to the ultimate analytical tool in the form of mass spectrometry to resolve their analytical problem. There are methods using ion chromatography post-distillation (or even without distillation) that have been well documented, and these are certainly both specific and sensitive enough in most instances. As a chemist who enjoys the slightly black art that is IC, it would have been my first port of call. However, the fact that the LGC was required to move further into liquid chromatography-mass spectrometry not only highlights the analytical difficulty, but also the stimulating and interesting puzzles that can be presented to the food analyst.

Food Testing – The Ancient and Secret History of ASH

Of course, there isn’t much real Ancient and Secret History – although it is a diverting acronym. Nonetheless, the fact that you can take pretty much anything you like and change its very nature by incineration at high temperature has been known, and often deliberately obscured, for many millennia. Indeed, although the apparent magic of metalworking has been around for at least 10,000 years, it arguably wasn’t until the 18th century that Lavoisier demonstrated the principle of conservation of mass and began to understand exactly what might be happening in a furnace. But what, one may ask, has any of that to do with food testing?

Analysis for ash is actually one of the oldest food authenticity tests. Determinations of ash were used by the Victorian food analysts, the original analytical chemists, to indicate adulteration of such essentials as flour and spices. It must have been tempting for the spice merchants of old to bulk out their products with a shovel or two of suitably coloured dry soil, and thereby extend a healthy profit. It is probably just as tempting now, but soil is pretty much full of ash and spices certainly aren’t, so this sort of gross adulteration is readily revealed. As well as adulteration, ash content can also be a very good indicator of specific features of a food product. For example, elevated levels of ash in minced meat can indicate elevated levels of bone, which could point to a specific butchery procedure having been applied.

But what, after all, is “ash”? It will come as no surprise to learn that whilst it is almost impossible to define specifically the chemical nature of ash, it is very easy to define the physical process from which it results. Therefore, in food testing ash is simply “the residue remaining following incineration”. That incineration usually occurs at a temperature between 500°C and 550°C. It is possible to propose a lower temperature, and higher temperatures are occasionally used in specific circumstances, but for general purposes the 500–550°C range seems to do the job pretty well. At that temperature virtually all the organic material is destroyed and, most of the time, a dark grey to white ash residue remains. It is simply a collection of all the inorganic material that was either a thermally stable salt to begin with or has reacted to form a thermally stable salt within the furnace. This ash will then comprise such compounds as sulphates, chlorides, oxides, phosphates etc. This does mean that the ash content of a food material can range anywhere from zero (e.g. pure sugar) through to 100% (e.g. pure salt).

The main purpose of determining the ash content within food testing is usually as an estimation of the mass of inorganic material in the food. The result is then used to calculate a value for carbohydrate within the food and hence, ultimately, the energy content of a food. These calculations are based upon the very sound assumption that the major constituents of a food comprise only moisture, ash, fat, protein and carbohydrate. It is relatively easy to determine levels of moisture, ash, fat and protein, so whatever remains must be carbohydrate of some form.
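
Expressed as arithmetic, the carbohydrate-by-difference approach and the subsequent energy estimate look something like the sketch below. The conversion factors shown (17 kJ/g for protein and carbohydrate, 37 kJ/g for fat, 8 kJ/g for fibre) are the familiar labelling factors; whether fibre is subtracted and energy-converted separately depends on the labelling convention in use, and the composition figures are purely illustrative.

```python
def carbohydrate_by_difference(moisture, ash, fat, protein, fibre=0.0):
    """Whatever is not moisture, ash, fat, protein (or fibre, if it is
    being declared separately) is treated as carbohydrate, per 100 g."""
    return 100.0 - (moisture + ash + fat + protein + fibre)

def energy_kj_per_100g(fat, protein, carbohydrate, fibre=0.0):
    """Labelling-style energy estimate using the usual conversion factors."""
    return 37.0 * fat + 17.0 * protein + 17.0 * carbohydrate + 8.0 * fibre

# Illustrative composition of a baked product (g/100 g):
moisture, ash, fat, protein, fibre = 30.0, 1.5, 12.0, 8.0, 3.0
carb = carbohydrate_by_difference(moisture, ash, fat, protein, fibre)
print(round(carb, 1))                                        # 45.5 g/100 g
print(round(energy_kj_per_100g(fat, protein, carb, fibre)))  # ~1378 kJ/100 g
```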

The actual process of reducing a food material to ash can also be used as a preparatory stage within food analysis. This is particularly true with reference to the determination of a specific mineral such as sodium. Complex materials such as food almost always need pre-treatment prior to elemental analysis, usually either a dry ash, or alternatively a wet oxidation using acid. Having determined the ash content of a food it is an easy subsequent step to then use that residue for a sodium determination.
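
The subsequent arithmetic is the easy part: the ash is taken up in dilute acid, made to a known volume, and the concentration measured in that solution is scaled back to the original test portion. A minimal sketch, with invented volumes and readings:

```python
def sodium_mg_per_100g(solution_mg_per_l, dilution_volume_ml, sample_mass_g):
    """Scale a measured solution concentration back to the original sample."""
    mg_in_solution = solution_mg_per_l * (dilution_volume_ml / 1000.0)
    return 100.0 * mg_in_solution / sample_mass_g

# Illustrative: 5 g of food is ashed, dissolved and made up to 100 mL,
# and the instrument reports 20 mg/L sodium in that solution.
print(sodium_mg_per_100g(20.0, 100.0, 5.0))  # 40.0 mg/100 g sodium
```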

However, it should not be assumed that the incineration of a sample of food is necessarily an easy or simple process. Although the physical conditions required are severe, the treatment of the test portion can be very important. Certainly, introducing a mass of wet material to a furnace already at 550°C is likely to be problematical, as the immediate and ferocious boiling of water will almost certainly cause the material to “spit” and send fragments flying in all directions. It is a lesson that many trainee analysts have to learn, and in my particular memory raw sausages and cooked rice were always samples to be wary of – simple drying or charring of the test portion is almost always a good idea before placing in a hot furnace. Even then, random samples sometimes appear astonishingly resilient to even a lengthy incineration process; although it is usually nothing that a drop of deionised water and another 30 minutes in the furnace cannot resolve.

There are also choices of instrumentation to be considered. Not only are traditional laboratory “muffles” used, but also microwave units that can complete an incineration in significantly shorter time periods. Both can be used most successfully. However, although there are a number of circumstances where a microwave would be my method of choice, there is no doubt that an overnight incineration in an old-fashioned laboratory furnace is very hard to beat as a rugged and routine procedure.

Food Testing – Sugars: although the chromatography can be sexy it is the extraction that makes it so sweet.

The sugar content of food is of considerable interest and importance in modern western society. Sugar is cheap and we are eating more of it than we used to. The evidence certainly suggests that at least some of the rise in obesity and diabetes we now see may well be significantly influenced by this increase in sugar intake. Therefore, if we want to reduce both our use of and our reliance on sugar as an ingredient in food, we do need to know how much is present in our food.

At this point it is probably best to confirm a definition of sugar. In food, sugar is not just the stuff that you put in your tea. White table sugar is sucrose, which is just one of the six sugars that are usually considered as being present in macro-amounts within food. The six that we tend to group as “sugar” are sucrose, glucose, galactose, fructose, lactose and maltose. The determination of sugar in food is usually performed by chromatography to identify and quantify those sugars present.

The use of chromatography is likely to be restricted to one of two options. The first is HPLC (High Performance Liquid Chromatography) using a refractive index detector, although alternatives for this detection method are available. The second is ion chromatography using HPAEC (High Performance Anion Exchange Chromatography) with an electrochemical detector. Both of these approaches can work well and both can have their difficulties.

It is at this point that I must admit to something of a bias. The very first chromatography system that I ever used was an ion chromatography unit for sugar analysis, and I have been in love with the technique ever since. In terms of both its stability and its ability to resolve each sugar into a separate, measurable peak within a relatively short run time, it is unrivalled. It is also wondrously sensitive, although this is a double-edged sword from an analytical perspective, as many foods contain a lot of sugar, which can overload a sensitive system. The price tag associated with these systems can also be prohibitive, but one has to accept that they do not develop and build themselves. Nonetheless, when one does exploit the latest innovations within HPAEC the results can be most impressive, not to say sexy. The following is taken from a new system that I have recently set up.

[Chromatogram: 0.1% mixed sugar standard (MSS)]

The chromatogram shows a mixed sugar standard, with each peak representing 0.1 g/100 ml of the corresponding sugar. The seventh sugar in this mix is xylose, which has been added as an internal standard. Each component is fully resolved, with highly efficient chromatographic separation. It is therefore possible to determine very accurately the concentration of each component within the test solution.

It is unlikely that there is a more significant sentence in this essay than the previous one. It is therefore the test solution that is the most important factor within our analysis. If an analyst does not fully extract the sugar from a test portion then it does not matter which chromatographic technique is employed, or how clever or expensive the equipment is: the results will be inaccurate.
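
That said, once a sound extract is in hand, turning peak areas into concentrations is the straightforward part. Since xylose is mentioned above as an internal standard, the sketch below shows single-point internal-standard quantification; the peak areas and response factors are invented, and a real method would build response factors from a full calibration with recovery checks.

```python
def response_factor(area_analyte, area_istd, conc_analyte, conc_istd):
    """Relative response factor derived from the mixed sugar standard."""
    return (area_analyte / area_istd) * (conc_istd / conc_analyte)

def quantify(area_analyte, area_istd, conc_istd, rf):
    """Analyte concentration in a test solution via the internal standard."""
    return (area_analyte / area_istd) * conc_istd / rf

# Calibration from the 0.1 g/100 mL mixed standard (areas are illustrative):
rf_sucrose = response_factor(area_analyte=152_000, area_istd=148_000,
                             conc_analyte=0.1, conc_istd=0.1)

# A test solution spiked with xylose at 0.1 g/100 mL:
conc = quantify(area_analyte=118_000, area_istd=145_000,
                conc_istd=0.1, rf=rf_sucrose)
print(round(conc, 4))  # ~0.0792 g/100 mL sucrose in the test solution
```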

Regardless of the instrumentation used, when determining sugar content a test portion is usually extracted in water, or a water-based extraction solvent. There are many imponderables within this extraction process. There are simple physical parameters to be considered, such as the force of any mechanical action employed to disperse a sample within the extraction solution, or the duration of dispersal and the temperature at which it is carried out. There is also the behaviour of the test material itself – a raw flour contains enzymes that will start breaking down starch into sugar when warm water is added, potentially leading to high results. And there is the handling of the test material – experience suggests that sugar will break down within homogenised wet samples stored for a few days in a refrigerator, which will lead to low results. It is important to take all such factors into account.

Such issues are not unique to the analysis of sugars. Any chemical test requiring a similar extraction process will be comparable. However, with sugar content being part of the legal requirement for the nutritional declaration on a food label, it is unlikely that there are many other analyses performed in such volumes, at such frequencies and on such varied sample matrices. When a laboratory does marry a rugged and effective extraction process with an appropriate chromatographic system then one can be confident that the obtained results will be really sweet.