
Refractory has always been an integral part of steam-generating boilers. In the steam-generating industry, refractory materials are used to fill gaps and openings to help keep the fire inside the firebox, to line the ash hoppers (wet and dry) that collect ash and slag, and to protect the lower furnace wall tubes inside fluidized bed, cyclone-fired, or refuse-derived fuel-fired boilers.

Because refractory is one of the smallest components of a steam-generating boiler, it rarely receives the attention it deserves. Yet it has been proven that, when properly designed and installed, refractory can reduce fuel costs (oil, gas, coal, or refuse) by as much as five to seven percent.

The primary cause of boiler inefficiency, and a major contributor to boiler shutdowns, is refractory failure. Discovering why a refractory material fails is a complex problem because failure is not caused by just one factor, but rather by a combination of the following factors:
  • The material selected does not match the environment that exists (i.e., reducing atmosphere);
  • The material selected does not match the fuel being burned (i.e., the amount of alkali, sulfur, hydrocarbons, vanadium, or moisture present in the fuel);
  • The material was improperly stored, mixed, installed, cured, and/or dried;
  • The material selected did not match the environment created after the burning of the fuel (i.e., ash and slag).

To fully understand a failure, it is helpful to understand the materials (installation aside) that can affect a refractory product.

A Look at Materials—Failure in the Making?

Slag can reduce furnace heat absorption, raise exit gas temperatures, increase attemperator spray-flow temperature, and interfere with ash removal or equipment operation. Slag is molten ash that has partially fused or re-solidified into deposits (ash fusion); whether and how it forms depends on the ash temperature and composition.

For slag to adhere to a surface and form deposits, the ash particles must have a viscosity low enough to wet the surface. If iron is present, it will raise all four values of ash fusion temperatures (initial deformation, softening, hemispherical, and fluid). The greater the iron content found in the ash, the greater the difference in ash fusibility between the oxidizing and the reducing condition.

There are two kinds of ash. Coal ash is the residual product left after the burning of coal, and oil ash is the residual product left after the burning of oil. Coal ash that has a low fusion point and a high basic oxide content can be very corrosive to refractory materials, and oil ash that contains vanadium, sulfur, alkalis, and hydrocarbons can cause severe problems for refractory materials.

Vanadium can act as a catalyst, forming a low-melting alkali-silica compound that could react and break down the basic components of a refractory material.

Sulfur can combine with lime and iron oxides found in some refractory materials and can reduce material strength. In the presence of moisture, sulfur also could form sulfurous and sulfuric acids, which could react with the basic components of a refractory material.

Alkali such as sodium (Na) and potassium (K) can chemically react with silica found in some refractory materials.

Hydrocarbons, in conjunction with a reducing atmosphere (a zone with less air than is required for complete combustion, the balance of the combustion air being added at another location), can react with iron oxides and form large carbon deposits in the refractory material. This eventually can cause spalling (loss of fragments or pieces) at the refractory surface.

Finding the Root Cause of a Refractory Failure
Step 1: Discovery Process

It is necessary to collect and document some basic information. In many cases, the discovery process requires interviews with plant and installation personnel. These professionals know first-hand about the refractory process. The following information should be identified and documented:

  • Material samples and data sheets of the existing brick or refractory lining—this information may be supplied by the purchasing agent or by the plant engineer;
  • Material samples of the ash clinkers and slag—samples may be supplied by maintenance or engineering personnel from the plant;
  • Chemical analysis of the fuel being burned (coal, startup oil, refuse, wood, steel, aluminum, etc.)—this information may be supplied by the plant engineer;
  • Storage location and duration of the storage prior to installation—this may be supplied by the plant or the installation contractor;
  • Manufacture date of the refractory material—this information may be supplied by the refractory manufacturer;
  • Ambient condition at the time of the installation—this should be supplied/verified by both the plant personnel and the installation contractor;
  • How much material was installed—this information may be supplied by plant personnel and/or the installation contractor;
  • How it was installed or applied (pneumatically, troweled, poured, shotcrete, etc.)—this may be supplied by the plant or the installation contractor; and
  • How the material was cured and/or dried and what procedures were followed—this information may be supplied by the plant and/or the installation contractor.
Step 2: Examine and Test the Existing Material

The existing material (or the lack thereof) should be examined for signs that may indicate the root cause of the failure. When looking at an existing refractory lining or photos of the existing lining, keep in mind the following questions:

  • Did the material fail due to thermal shock (large sections of the top surface area sheared away)?
  • Is there any evidence that the materials had been exposed to excessive temperatures (excessive shrinkage, glazing, etc.)?
  • Is there any evidence of mechanical abuse (broken and jagged edges or holes)?
  • Did the material fail due to the operation of the equipment, furnace, or boiler?
  • Was the refractory material installed improperly (i.e., porous or popcorn-like texture)?

Collecting Samples for Testing: Samples of the existing refractory material should be gathered and sent out for a cold crush test, which will verify the strength of the installed material. The results can be compared to the manufacturer’s material data sheets. If the strength of the existing installed material is low, it is probable that the mix was too wet when installed.

Samples of the existing slag and ash clinkers should be gathered and sent out for chemical analysis. The slag samples should also have a pyrometric cone equivalent (PCE) test performed to verify the minimum temperature that the refractory may have been exposed to.

Step 3: Calculate the Base-to-Acid Ratio

The next step is to document the environment to which the refractory material was exposed. One way to do this is by calculating the base-to-acid ratio (b/a), using values taken from the information received from the chemical analysis test mentioned in Step 2. This b/a value will give a starting point as to what type of refractory material should have been chosen.

Here is one way to calculate the base-to-acid ratio:
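
One widely used form of the ratio divides the basic oxides reported in the ash analysis by the acidic oxides (all values taken as weight percentages from the chemical analysis in Step 2):

b/a = (Fe2O3 + CaO + MgO + Na2O + K2O) / (SiO2 + Al2O3 + TiO2)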

When the base-to-acid ratio is less than or equal to .25, it indicates an acid condition. An acid condition would indicate that a SiO2 type refractory should be considered.

When the base-to-acid ratio is greater than .25 but less than .75, it indicates a neutral condition. A neutral condition would indicate that an Al2O3, SiC, or chrome type refractory material should be considered.

When the base-to-acid ratio is greater than or equal to .75, it indicates a basic condition. A basic condition would indicate that an MgO or dolomite type refractory material should be considered.
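
As an illustration only, the short sketch below applies the ratio and the threshold ranges described above; the oxide values are hypothetical, not data from any particular fuel or ash sample.

    # Sketch: base-to-acid ratio from an ash oxide analysis (weight percent).
    # The oxide values below are hypothetical examples.
    def base_to_acid(oxides):
        basic = sum(oxides.get(k, 0.0) for k in ("Fe2O3", "CaO", "MgO", "Na2O", "K2O"))
        acidic = sum(oxides.get(k, 0.0) for k in ("SiO2", "Al2O3", "TiO2"))
        return basic / acidic

    def suggested_refractory(ba):
        # Threshold ranges follow the article text above.
        if ba <= 0.25:
            return "acid condition: consider a SiO2-type refractory"
        if ba < 0.75:
            return "neutral condition: consider an Al2O3, SiC, or chrome-type refractory"
        return "basic condition: consider an MgO or dolomite-type refractory"

    ash = {"SiO2": 45.0, "Al2O3": 20.0, "TiO2": 1.0,
           "Fe2O3": 18.0, "CaO": 7.0, "MgO": 1.5, "Na2O": 0.8, "K2O": 1.7}
    ba = base_to_acid(ash)
    print(round(ba, 2), "->", suggested_refractory(ba))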

Step 4: The Review Process

Now it is time to analyze all the information gathered in Steps 1 and 2. All of the service conditions must be reviewed and analyzed thoroughly in order to see how they could affect the installed/failed material. This includes the fuel or raw materials being burned, startup fuel used, ash and slag content, gas temperatures, and plant operations and procedures. For example:

  • Moisture content in the fuel can affect the refractory material. High moisture content in the fuel, alone or combined with a reducing atmosphere, can cause separation of the grain in silicon carbide-based materials. This separation can occur when the total moisture content of the fuel is greater than fifteen percent, or when the combined percentage of fuel moisture and reducing atmosphere exceeds fifteen percent.
  • Certain amounts of chemicals (iron oxide, potassium, or sulfur) found in the fuel, slag, or ash could react with cements (calcium-aluminate) that are present in a cement-bonded type refractory, especially if a reducing atmosphere is present.
  • Certain startup fuel (i.e., #6 oil) may contain vanadium, which could react with the silica and lime in the cement found in a cement-bonded-type refractory. When vanadium is present, it can cause a chemical attack and surface failure, or cause a complete refractory failure (no refractory present).
Step 5: Review of Installation Procedures

The final analysis also must take into account proper installation procedures. All of the items listed below could prevent a refractory material from reaching its proper strength. A refractory material that is not able to reach its designed strength has the highest potential for failure.

Proper Manufacture Date and Storage: Refractory material should be installed within an appropriate period after its manufacture date: one year is recommended for a cement-bonded material used for conventional seals inside boilers, and three months or less for materials used in high-temperature and abrasion areas such as those found inside fluidized bed boilers, cyclone-fired boilers, or wet bottom ash hoppers. Refractory material always should be stored in dry, well-ventilated conditions. Use fresh refractory materials and follow proper storage procedures to ensure that the refractory will not lose strength.

Proper Water for Mixing: Many common industrial compounds can easily contaminate a refractory mix and seriously affect its strength. Certain salts can react with the refractory cement and make the material almost useless. Most refractory manufacturers recommend that potable water (suitable for drinking) be used for mixing. Using the wrong type of water (e.g., river water) will hinder the ability of the refractory material to reach its proper strength.

Equipment and Pot Life: Using the right type of mixer, following proper mixing procedures, and being aware of the materials’ pot life also must be considered. Using the wrong mixer or pneumatic gun also could impact the strength of the refractory material. For example:

  • Many pneumatically applied refractory materials require the material to be pre-wetted prior to the actual mixing and installation. If the installing contractor used a continuous feed mixer (i.e., one in which dry material is fed into a hopper and water is added only at the nozzle), the material could not have been pre-wetted, due to the nature of a continuous feed mixer. This could reduce the strength of the installed material.
  • Every refractory material has a pot life, which designates how long a mixed refractory material can be used after mixing. Failure to follow recommended pot life times could result in a refractory material not reaching its proper strength. If the pneumatic installation of the refractory is interrupted for a period of time longer than the recommended “pot life” time period, the material found in the mixer and hoses should be discarded and not re-used.

Ambient Conditions: Cold or hot weather could adversely affect the strength of a refractory material. Most refractory manufacturers recommend a specific final mix temperature range; although the recommendations differ slightly, a final mix temperature of 40°F to 90°F is typical. It is also very important to protect the installed materials from freezing for a minimum of forty-eight hours or until they are thoroughly dried. Failure to take into account the ambient conditions at the time of installation could impact the ability of the refractory material to reach its proper strength.

The following formula is one way to estimate the final temperature of a refractory mix from the variables involved in mixing (e.g., water temperature, air temperature, storage temperature):
X = [(W * T) + 0.22 (Wc * Ts)] / (W + 0.22 Wc)

W = weight of water (a quart of water weighs 2.08 pounds)
Wc = weight of dry refractory
T = temperature of water (degrees F)
Ts = temperature of solids (degrees F)
X = temperature of mixed refractory (degrees F)

Using the above formula and knowing the ambient conditions at the time of installation can help determine if the installed material was adversely affected by the ambient conditions.
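
For illustration, the short sketch below evaluates the formula for a hypothetical batch (the quantities and temperatures are made up; the 0.22 factor approximates the specific heat of the dry solids relative to water):

    # Sketch: estimated temperature of a refractory mix using the formula above.
    # X = [(W * T) + 0.22 (Wc * Ts)] / (W + 0.22 Wc). Values are hypothetical.
    def mix_temperature(water_quarts, water_temp_f, dry_weight_lb, solids_temp_f):
        w = water_quarts * 2.08          # a quart of water weighs about 2.08 pounds
        return (w * water_temp_f + 0.22 * dry_weight_lb * solids_temp_f) / (w + 0.22 * dry_weight_lb)

    # Example: 6 quarts of 50 F water mixed into a 100-lb bag stored at 85 F
    print(round(mix_temperature(6, 50, 100, 85), 1))

In this hypothetical case the mix comes out near 72°F, within the 40°F to 90°F range noted above.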

Curing Procedures: Only after the refractory material has been cured and/or dried will it be at its proper strength. Almost all refractory materials (except those that are phosphate-bonded) must be cured prior to the drying process. Failure to properly cure a cement-bonded refractory material is the number one contributor to refractory failure and lack of longevity. Curing allows the chemical action to take place inside the refractory and helps ensure that the refractory can reach its maximum strength when properly dried. It is recommended that the surface of the refractory be kept moist (curing compound, wet canvas bags, or spraying water) or the surrounding atmosphere humid for a period of at least twenty-four hours.

Drying Procedures: The dry-out or bake-out of the refractory takes place after the curing period and removes all mechanical and chemical water left in the installed material. It allows the refractory material to reach its proper strength. Unlike the curing of refractory, which is done right after the installation (usually by the installing contractor), the dry-out can be completed at any time. This does not apply to phosphate-bonded refractory materials, however: a phosphate-bonded material is cured and dried at the same time and must be dried within the first two to three weeks after installation, because such a material absorbs moisture from the surrounding atmosphere and eventually will begin to slump and fall off.

A new lining should be heated gradually to let the moisture escape and to reduce internal stresses. The rule of thumb is to base the hold time on the thickness of the thickest area of refractory lining on the entire work project, at roughly one hour per inch. For example, if the thickest area is four inches thick, the hold time is four hours.

The following heating schedule is of a general nature for ideal conditions for a one-inch-thick refractory lining on fluidized bed furnace walls:

  • Raise temperature at 75°F per hour to the 250°F to 400°F range; hold for two hours.
  • Raise temperature at 75°F per hour to 600°F to 800°F; hold for two hours.
  • Raise temperature at 75°F per hour to 1,050°F to 1,200°F; hold for two hours.
  • Raise temperature at 75°F per hour to operating temperature.

In many applications, conservative heating rates can be followed without great penalty; but in some cases, such rates are uneconomical from a production standpoint. Each material has its own allowable deviation from a conservative heating schedule. Check with the refractory manufacturer or a refractory expert for a compromise between safe heating rates and operating costs. Following improper dry-out procedures has two adverse effects on the refractory: it will never reach its maximum strength, and it can contribute to spalling at the refractory surface.
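
As a rough planning aid only, the sketch below lays out a ramp-and-hold plan in the spirit of the schedule above; the hold points are assumed midpoints of the ranges listed, and the actual ramp rates and hold times should always come from the refractory manufacturer.

    # Sketch: rough dry-out plan based on the general schedule above.
    # Hold points are assumed midpoints of the listed ranges; ramp rate and hold
    # hours are inputs that should come from the refractory manufacturer.
    def dry_out_plan(start_f, operating_f, hold_hours, ramp_f_per_hr=75.0,
                     hold_points_f=(325.0, 700.0, 1125.0)):
        plan, current = [], float(start_f)
        targets = [p for p in hold_points_f if p < operating_f] + [float(operating_f)]
        for i, target in enumerate(targets):
            ramp_hr = (target - current) / ramp_f_per_hr
            hold_hr = hold_hours if i < len(targets) - 1 else 0.0
            plan.append((current, target, round(ramp_hr, 1), hold_hr))
            current = target
        return plan

    # Two-hour holds, as in the sample schedule for a one-inch lining
    for leg in dry_out_plan(70, 1500, hold_hours=2.0):
        print("ramp %.0f F -> %.0f F over %.1f hr, hold %.1f hr" % leg)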

Know Your Quantities: Finally, to help ensure that the refractory material is not installed too thickly or too sparsely, it is important to know the quantity of material required for proper installation. This quantity then can be compared to the amount that was installed. For example:

  • Insufficient material installed on the lower furnace walls inside a fluidized bed boiler could contribute to excessive stud and tube wall failure.
  • Excessive material installed on the lower furnace walls inside a fluidized bed boiler in conjunction with not following proper curing and/or drying procedures could contribute to a complete refractory material failure.
Final Analysis

Finding the root cause of a refractory material failure must take into account many different factors, such as material selection and manufacture date, plant operations, material storage, mixing, installation, curing, and drying. Only by understanding all aspects of the design and installation of the refractory material can one find the root cause of the failure and help prevent future failures. Refractory failure is the number one cause of boiler inefficiency and a major contributor to boiler shutdowns. Refractory that is properly selected and installed will last longer, help minimize the number of shutdowns required, and lead to savings in annual fuel costs. Refractory designed and installed to save energy also saves money and is essential to efficient plant operation.

1 ASTM C-64.
2 Refractories in the Generation of Steam Power – McGraw-Hill Book Company, F. H. Norton (1949).

Figure 1
Figure 2: Spalled refractory inside ash hopper
Figure 3: Wet bottom ash hopper with failed refractory areas
Figure 4: Fly ash inside penthouse between screen tubes at middle of boiler
Figure 5: Slag sample with furnace wall tube pin studs embedded in slag
Figure 6: Close-up of slag with studs from furnace
Figure 7: Shotcrete application mixer with 2,000-pound refractory tote bag hanging above mixer
Figure 8: Close-up of furnace wall studs worn away due to failed refractory
Figure 9: Slag samples taken from furnace
Figure 10: Furnace walls inside a fluidized bed boiler without refractory
Figure 11: Spalled refractory on furnace floor
Figure 12: Gun application of refractory

Manufacturers and operators of industrial heating equipment (furnaces, boilers, and kilns) are continuously examining ways to reduce the large energy losses that occur through the surfaces of their heating equipment. The recent trend of rising energy costs is pushing the need to reduce these heat losses even further. A proven and effective way to combat energy losses and the resultant high energy bills has been to optimize the selection of the thermal barriers (i.e., refractory and insulation materials) used in process heating equipment. Some new and emerging thermal barriers now on the market are opening attractive opportunities to leverage refractories for energy savings and improved productivity while protecting the structural integrity and operability of heating units.

Heat loss occurs through the walls of process equipment via conduction, convection, and radiation. In the case of industrial metal melting furnaces, these losses account for ten percent to as high as fifty percent of the total energy consumed in the melting process.1 The U.S. Environmental Protection Agency (EPA) estimates that a properly maintained insulating refractory system can improve the thermal efficiency of the heating process by as much as fifty percent.2 A U.S. Department of Energy (DOE) study on opportunities to improve industrial energy efficiency estimates that refractory improvements in heating systems could facilitate energy savings from 166 trillion Btu per year in the near term to 830 trillion Btu per year over the long term.3 These savings translate to a $1.3 billion to $6.8 billion annual reduction in the current expenditures of the U.S. industrial sector for natural gas.4 To add to the bargain, DOE estimates that installing thermal insulation material and maintaining the refractory linings can potentially have a simple payback period of one year or less in some industrial process heating applications.5
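
As a rough check on those dollar figures, the arithmetic is straightforward if one assumes a heating value of roughly 1,030 Btu per cubic foot of natural gas and the 2005 industrial price cited in the references:

    # Sketch: converting estimated energy savings to annual dollar savings.
    # Assumes ~1,030 Btu per cubic foot of natural gas and the 2005 industrial
    # price of $8.47 per thousand cubic feet cited in the references.
    BTU_PER_CUBIC_FOOT = 1030.0
    PRICE_PER_MCF = 8.47

    def annual_savings_dollars(trillion_btu_per_year):
        cubic_feet = trillion_btu_per_year * 1e12 / BTU_PER_CUBIC_FOOT
        return cubic_feet / 1000.0 * PRICE_PER_MCF

    for tbtu in (166, 830):
        print(tbtu, "trillion Btu/yr -> about", round(annual_savings_dollars(tbtu) / 1e9, 1), "billion dollars/yr")

With these assumptions the result lands near $1.4 billion and $6.8 billion per year, in line with the range cited above.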

Extending the Role of Refractory Beyond Protecting the Walls

The internal surfaces of many process heating vessels usually require protection from high operating temperatures and harsh combustion and chemical reaction environments. Refractory materials are employed to protect the internal surfaces and contain the heat within the structure.6 They are usually nonmetallic ceramic or brick-like blocks. The assembly of refractory systems requires complex masonry-like skills because, in addition to walls, they must protect the many entry and exit ports on a vessel. The physical property demands on refractory materials, in addition to mechanical strength and thermal conductivity, are extensive and include: chemical composition and resistance, density, porosity, permeability, thermal expansion coefficient, spalling resistance, and thermal cycling robustness. Premature corrosion, erosion, and wear of refractories can lead to contamination of products, add to operating costs, and introduce structural integrity concerns to furnaces and boilers.7

Most refractory materials research and development (R&D) has focused on enhancing their performance as “refractories”—i.e., reliable corrosion, erosion, and wear resistance under the elevated temperatures and harsh environments of industrial heating units. These are the properties that control maintenance requirements and downtime. In addition to providing protection to the shell, however, refractory systems can be designed to minimize heat transfer to the exterior, lowering energy losses and enabling more efficient equipment operation. Insulating refractory systems work to enhance energy efficiency in two ways:

  • Reduce energy loss through the structure walls (thermal conductance); and
  • Minimize heat storage in the walls (thermal capacity), which lowers startup and shutdown losses.

Innovative advances in refractory technology offer new energy-saving opportunities in equipment rebuilding and new equipment design by combining a range of refractory materials to achieve thermal insulation along with physical and chemical resilience at high temperatures. Such optimum refractory designs can result in higher throughput, higher operating temperatures, and overall better process efficiency. When selecting the proper insulation or refractory materials, manufacturers must make a balanced choice, examining the cost of the material versus the dollars saved in energy bills. It is not uncommon to come across configurations in which ten to fifteen different types of insulation and refractory materials are deployed in a single furnace or boiler unit.

The impact of better refractories is unlikely to change external insulation requirements. These will remain about the same, since the shell wall temperature is one of the key determinants of the structural design and external insulation is significantly less costly than refractory materials. Refractory materials are implemented because the shell cannot withstand the high temperature or corrosive environment within the process heating vessel. When it is possible, a manufacturer will attempt to operate without a refractory and simply use an insulation barrier on the outside. This decision is made based on short-term costs, and it can be very costly in the long term.

Emerging Insulating Refractory Technologies

The following emerging refractory technologies have been shown to improve the thermal efficiencies of various high-temperature industrial heating equipment.

Micro-Porous Silica

A commercially available micro-porous silica-based material can be installed between the refractory lining and the metal shell of an industrial process heating unit. The material reduces convection, conduction, and radiation losses, which are causes of major energy loss in industrial process heating equipment. This material reduces convection losses because the microscopic voids or gaps between the ceramic particles and fibers are so small that the air carrying heat cannot travel through them. Although these particles are close together to form voids or gaps that minimize air convection, they are designed to be far enough away from one another to minimize the conduction of heat through the solid material. Finally, radiation losses are minimized because the material consists of specially selected particles called “opacifiers” that are designed to reflect, refract, and re-radiate energy and prevent it from passing through the microporous material.8

Since this material has a very low thermal conductivity of 0.038 W/mK at 1,472°F (800°C), the refractory layer can be made thinner to increase the capacity of an existing heating unit. These liners have been shown to reduce the internal refractory/insulation thickness by as much as fifty percent and to reduce startup and shutdown heating requirements due to the reduction in refractory mass. The liners are designed to withstand continuous operating temperatures of up to 1,832°F (1,000°C).9
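
The effect is easiest to see with a simple one-dimensional conduction estimate. In the sketch below, only the 0.038 W/mK backup value comes from the text above; the layer thicknesses, the dense-lining conductivity, and the face temperatures are assumed for illustration, and fixing both face temperatures is itself a simplification.

    # Sketch: steady-state heat flux through a flat lining, with and without a
    # micro-porous backup layer. Only the 0.038 W/mK backup conductivity comes
    # from the article; the other values are assumed for illustration.
    def heat_flux(hot_face_c, cold_face_c, layers):
        """layers: list of (thickness_m, conductivity_W_per_mK); returns W/m^2."""
        resistance = sum(t / k for t, k in layers)
        return (hot_face_c - cold_face_c) / resistance

    dense_only = [(0.20, 1.5)]                      # 200 mm dense refractory (assumed k)
    with_backup = [(0.15, 1.5), (0.025, 0.038)]     # thinner lining plus 25 mm micro-porous board

    for name, stack in (("dense only", dense_only), ("with micro-porous backup", with_backup)):
        print(name, round(heat_flux(1000.0, 150.0, stack)), "W/m^2")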

Monolithic Refractory Ceramic

Trilliam Thermo Technologies, with the support of the DOE’s Inventions and Innovations initiative,10 has tested and validated an innovative refractory material, G-5, that offers numerous cost-saving advantages in high-temperature heating equipment. This monolithic refractory material offers greater dimensional stability, increased wear resistance, and improved thermal shock properties when compared to conventional materials. G-5 has one-third the thermal conductivity of conventional refractory material, thus reducing heat loss from the walls of the heating unit and lowering the energy consumption.

Instead of conventional refractory brick, G-5 refractory material is applied to the heating unit by spraying the material on the interior of the structure, which reduces the downtime for refractory replacement when compared to typical brick refractory materials. As is true with any thermal barrier improvement in refractory, G-5 improves the product to fuel use ratio. The material can be used in a variety of process heating applications including rotary kilns employed in the cement industry, boilers in the forest products industry, and furnaces employed in the metals and glass industries. This material was proven to sustain temperatures in excess of 2,000°F (1,100°C) and test results have been validated to 4,395°F (2,423°C) for over 12 hours.11 To learn more about this material and DOE’s involvement, please visit DOE’s Inventions & Innovations website, www.eere.energy.gov/inventions.

Advanced Nanoporous Composite Materials for Industrial Heating Applications

The Lawrence Berkeley National Laboratory, in a cost-shared partnership with DOE, has been researching new insulating composite materials for process heating applications. The result is the development of a low-cost nanoporous ceramic based on alumina, chromia, and silica. The material is an aerogel-based ceramic. It can be synthesized using inexpensive chemicals rather than the traditional, expensive sol-gel alkoxide precursors, reducing the cost of the raw materials by two orders of magnitude.

In testing, this material demonstrated approximately half the thermal conductivity of the lowest-density firebrick, thus providing far better insulation and reduced energy loss from the walls of the heating unit. The material also demonstrated only three percent shrinkage over nearly 2,000 hours (83 days) at 1,832°F (1,000°C) and only four percent shrinkage after 6,300 hours (262 days), demonstrating the material's unusually long life. Shrinkage is important to monitor because it can create cracks and gaps in the refractory and expose the shell of the heating system to high temperatures and corrosive materials. Swelling causes additional stress on the refractory bricks, which in turn can put more stress on the shell and lead to spalling of refractory. These nanoporous composites can be applied in a variety of industries that process large quantities of materials at high temperatures, such as the metals, chemicals, refining, forest products, and cement industries.12 More information is available on DOE’s Industrial Materials of the Future website at www.eere.energy.gov/industry/imf.

New Refractory Materials for Black Liquor and Biomass Gasification

One barrier to using gasification for black liquor is the need for refractory materials that can withstand the extremely harsh operating conditions of high alkali concentrations, elevated temperature of 1,742°F (950°C), and severe gas/liquid flow characteristics. This corrosive environment results in significant losses of refractory materials and metallic components, causing structural and safety issues, thermal efficiency losses, and unacceptable maintenance costs and downtime. The current short refractory lifetime in black liquor gasification makes maintenance costly and does not fit the scheduled mill maintenance, which typically runs on an annual basis.

Researchers from Oak Ridge National Laboratory, in partnership with DOE, developed improved corrosion-resistant refractories for gasifiers. The materials were developed by gaining a better understanding of the failure mechanisms of existing materials and using modeling to understand the fluid flow inside the gasifier. In 2004, new corrosion-resistant spinel refractory materials developed as a result of this research were installed in an operating black liquor gasifier unit. When the unit was shut down after one year for inspection and maintenance, it was found that the new refractory material did not require replacement. Although initially this material cost more than the traditional gasifier material, it proved to have a longer service life. Further, since the material did not break down under the harsh environment of the gasifier, it provided optimal insulation for the gasification process, saving energy costs in addition to maintenance costs. This material also has been shown to be able to operate at higher temperatures than traditional refractory materials, leading to an effective overall increase in energy efficiency. More information about this material and project is available at DOE’s Industrial Materials of the Future website, www.eere.energy.gov/industry/imf/.

Conclusion

The choice of refractory materials is dictated by the cost of installation and performance. In the past, it was good business sense to use low-cost, less energy-efficient refractory materials. However, with escalating energy prices, new refractory and insulation materials that were once considered expensive are now turning out to be good investments, promising significant reduction in energy bills. Manufacturers and equipment builders, therefore, need to continuously evaluate their thermal barrier options as new technologies emerge in the market and net gains from energy savings increase. Adopting the new and emerging barrier materials will assist manufacturers in lowering their energy bills and give equipment builders a competitive advantage. Maintaining an optimal thermal barrier of refractory material and insulation for process heating units provides manufacturers with that competitive advantage—saving manufacturers millions of dollars in lower energy costs, increasing productivity, lowering maintenance costs by avoiding unnecessary shutdowns, and increasing the capacity of a process heating unit.

Authors’ bios

Robert D. Naranjo, senior analyst at BCS, Incorporated, provides consulting and program support to the U.S. DOE, Office of Energy Efficiency and Renewable Energy, Industrial Technologies Program. He can be reached at rnaranjo@bcs-hq.com.

William T. Choate, group manager/senior technical staff, BCS, Incorporated, is a hands-on chemical engineer with over 30 years of engineering, R&D management, and consulting experience in the private and public sectors. He can be reached at Bchoate@bcs-hq.com.

Ehr Ping HuangFu is a technology manager at the U.S. DOE who oversees metalcasting and aluminum research under the Industrial Technologies Program. Contact him at Ehr-Ping.Huangfu@ee.doe.gov.

References

1 Kwon, Ji Yea; Choate, William T., Naranjo, Robert D., “Advanced Melting Technologies: Energy Saving Concepts and Opportunities for the Metal Casting Industry,” U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Industrial Technologies Program, Metal Casting Portfolio, November 2005: 7.

2 U.S. Environmental Protection Agency, “Global Warming – Actions: Wise Rules,” http://yosemite.epa.gov/oar%5Cglobalwarming.nsf/content/ActionsIndustryWiseRules.html#heat

3 Hemrick, James G., Hayden, H. Wayne, Angelini, Peter, Moore, Robert E., Headrick, William L., “Refractories for Industrial Processing: Opportunities for Improved Energy Efficiency,” U.S. Department of Energy, Office of Renewable Energy and Energy Efficiency, Industrial Technologies Program, January 2005: 5.

4 Based on the 2005 average industrial price of $8.47 per thousand cubic feet. Source: U.S. Department of Energy, Energy Information Administration, http://tonto.eia.doe.gov/dnav/ng/ng_pri_sum_dcu_nus_a.htm

5 Industrial Technologies Program, Best Practices Subprogram, “Process Heating: Metal and Glass Manufacturers Reduce Cost by Increasing Energy Efficiency in Process Heat,” Fact Sheet, May 2004.

6 Rase, Howard F., and M. H. Barrow. Project Engineering of Process Plants. New York: John Wiley and Sons, Inc., 1957: 484.

7 Schwam, David, and Wallace, Jack F., “Melting of Aluminum Alloys State of the Art and Future Trends,” Case Western Reserve University, August 2005: 15.

8 Olchawski, James, “SPECIAL REPORT: Saving Energy with Microporous Insulation,” http://www.ceramicindustry.com/CDA/Archives/a048ec0cf4ac7010VgnVCM100000f932a8c0

9 Schwam, David, and Wallace, Jack F., “Melting of Aluminum Alloys State of the Art and Future Trends,” Case Western Reserve University, August 2005: 16.

10 A program that provides financial and technical support to inventors and businesses for promising energy-saving concepts and technologies.

11 U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy (EERE), Office of Industrial Technologies, Invention and Innovations, “Monolithic Refractory Material: High-Temperature Refractory Ceramic Saves Energy,” Fact Sheet, May 1999.

12 U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy (EERE), Industrial Technologies Program (ITP), Industrial Materials of the Future (IMF), “Advanced Nanoporous Composite Materials for Industrial Heating Applications: Improve Energy Efficiency With New Low Cost Nanostructured Refractory Material,” Fact Sheet, May 2006.


Beginning in the mid-1970s, individuals within the regulated community, and even within the federal government, began advocating the use of negotiated rulemaking as a more efficient, sensible alternative to the traditional “notice and comment” procedures typically followed by federal agencies in the development of regulations. In the 1980s, the Environmental Protection Agency (EPA) and the Occupational Safety and Health Administration (OSHA) began using a negotiated process as an aid to the development of certain regulations. By 1990, Congress had formally endorsed negotiated rulemaking with the passage of the federal Negotiated Rulemaking Act, which later enjoyed the strong support of the Clinton Administration. Advocates of negotiated rulemaking tend to identify two primary benefits: reduced time to final rulemaking and a decrease in litigation over pending regulations. It is in this context that leading U.S. manufacturers in the refractory ceramic fiber (RCF) industry formed the Refractory Ceramic Fibers Coalition (RCFC), focused on workplace exposures to airborne fiber.

Background

Before discussing the RCF industry’s voluntary workplace protection program, it is useful to provide some background on the industry and the health issues associated with workplace exposure to RCFs. RCFs are advanced fibrous insulation materials used in various industrial, automotive, aerospace, and military applications requiring low weight and high heat resistance. RCFs are members of a group of materials commonly referred to as synthetic vitreous fibers (SVFs), a family of products that includes fiberglass (home insulation), rock wool, slag wool, and specialty high-temperature industrial fibers. RCFs are produced by melting and fiberizing a mixture of alumina (Al2O3) and silica (SiO2). The combination of low density, low volumetric heat capacity, and low thermal conductivity makes this a valuable insulating material, particularly for high-temperature industrial applications. RCFs were invented in the 1940s, and sales grew substantially in the 1970s during the energy crises. RCFs now account for approximately one to two percent of the total SVF market in the United States.

RCFs contain individual fibers having a length-weighted geometric mean diameter of less than 3 µm (respirable by humans); therefore, there is a potential risk associated with workplace exposure to airborne fibers. In the early 1990s, laboratory studies indicated that RCFs could cause cancer in animals exposed at doses hundreds of times higher than current average occupational workplace exposures. These studies are subject to extensive scientific debate due to evidence of flaws in the study design. In contrast to the animal studies, epidemiological investigations of persons actually exposed to RCF in the workplace for long periods of time did not show significant adverse health effects associated with RCF exposure. Nevertheless, the animal studies suggested that exposure to extremely high levels of RCF may pose some risk; therefore, RCF producers decided to take action to manage the potential human health risks rather than wait for more definitive scientific results or regulatory action.

Product Stewardship

The RCFC was formed in the late 1980s. The coalition and its member companies (Thermal Ceramics, Unifrax Corporation, and Vesuvius USA) account for more than ninety-five percent of all RCF currently produced in the United States. In 1991, the RCFC implemented a comprehensive, multifaceted Product Stewardship Program (PSP), designed to assist RCF manufacturers and end users in the evaluation, control, and reduction of workplace exposures to airborne fiber. The RCF PSP encompasses the entire “cradle-to-grave” life cycle of RCF-containing products, from primary manufacture, processing, and use through the ultimate disposal of RCF-containing materials.

The objective of the PSP for RCF was to help employees better manage the risk associated with workplace exposure to airborne RCFs. The PSP established and communicated proper material handling practices, engineering control technologies, personal protective equipment recommendations, regulatory information, and information pertaining to workplace exposure assessments. The PSP for RCF consists of the following seven key elements:

Health effects research:

This element includes epidemiological studies on worker cohorts, animal studies, and studies designed to learn more about fiber dosimetry—the relationships between exposure concentration, deposition in the deep lung, clearance of retained fibers, and possible biological effects. The information gained as part of this research is useful in conducting quantitative risk analyses.

Exposure assessments:

This element encompasses studies designed to estimate the exposed cohort (approximately 30,000 workers in the United States, and the same number in Europe) and to assess (qualitatively or semi-quantitatively) possible life-cycle exposure to various RCF-containing products. Based on this exposure “screening” analysis, products or product applications are identified and flagged for further quantitative analysis and executive review. The objective is to evaluate potential control options and, if these appear unsatisfactory, product substitutes.

Study of workplace controls:

This element (in concert with workplace monitoring) includes studies designed to identify and evaluate ways to reduce exposure (including engineering controls, workplace practices, and, for some jobs, use of personal protective equipment). Controls evaluated as part of this program element included use of general and local exhaust ventilation (LEV), baghouses, humidity control, isolation of high-exposure jobs, reengineering of dusty jobs, and development of improved work practices.

Workplace monitoring:

This element involves monitoring of task length average (TLA) and time-weighted average (TWA) fiber concentrations at facilities operated by RCF producers (termed “internal samples”) and their customers (termed “external samples”). The objectives are to identify high-exposure jobs, characterize exposure by industrial segment, detect time trends in occupational exposure, and use this information for benchmarking purposes. The study of workplace controls and workplace monitoring enabled the industry to examine the feasibility of meeting various occupational exposure limits (OELs). The industry adopted a series of progressively more stringent recommended exposure guidelines (REGs) based on monitoring results and demonstrated feasibility.
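
For context, the sketch below shows how an eight-hour time-weighted average is typically formed from task-length samples and compared against an exposure guideline; the concentrations and durations are hypothetical, and the 0.5 f/cc figure is the recommended exposure guideline discussed later in this article.

    # Sketch: eight-hour time-weighted average (TWA) from task-length average
    # (TLA) samples. Concentrations and durations are hypothetical.
    def eight_hour_twa(samples, shift_minutes=480):
        """samples: list of (fiber_concentration_f_per_cc, duration_minutes)."""
        return sum(conc * minutes for conc, minutes in samples) / shift_minutes

    tasks = [(0.9, 60),    # e.g., removal work
             (0.3, 180),   # fabrication
             (0.1, 240)]   # other duties
    print(round(eight_hour_twa(tasks), 2), "f/cc vs. the 0.5 f/cc recommended exposure guideline")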

Product research:

This element includes research initiatives designed to find ways to reduce potential exposure through product design (e.g., encapsulation), fiber dimension, and the development of high-temperature fibers that are less biopersistent.

Special studies:

This catchall category covers relevant projects that did not fit neatly into the elements listed above. Included in this category were studies to measure stack emissions, possible substitutes, and waste generation rates. Among other things, these studies were designed to anticipate regulatory needs and provide relevant information to decision makers.

Communications:

This element includes the use of various media to communicate the results of the PSP to employees, customers, unions, and government regulators. The industry developed videotapes, comic books (written in various languages), consumer telephone “hot lines” to provide time-critical information, material safety data sheets (MSDS), and (later) information posted on Internet sites maintained by the RCF producers, RCFC, and the European Ceramic Fibres Industry Association (ECFIA). A key part of the communications program was a customer outreach program. RCF producers helped form user groups (e.g., the RCF Vacuum Formers Association) to share relevant stewardship information. Another key component of the communications program was the decision to publish all scientific information in the peer-reviewed literature.

PSP 2002—Workplace Monitoring

On February 11, 2002, the RCFC member companies entered into a voluntary workplace protection agreement with OSHA. This program, entitled PSP 2002, is a highly acclaimed, multi-faceted strategic risk management initiative designed specifically to reduce workplace exposures to airborne RCFs. PSP 2002 has been developed for use wherever RCF is manufactured, fabricated, installed, or removed, as well as in other occupational settings where workers have potential exposure to airborne RCF. RCFC reports its progress on the implementation of PSP 2002 to OSHA annually. Although there are no reporting provisions for non-RCFC member companies, all RCF users were encouraged to adopt the key elements of PSP 2002. OSHA made a special point to commend RCFC for its commitment to assisting its industrial customers with managing workplace exposures to RCF. In a letter to RCFC dated February 11, 2002, the assistant secretary of labor for occupational safety and health, John Henshaw, commented, “OSHA believes that the commitment RCFC has made in developing the Program form an important step towards further improving worker protection. The 0.5 fiber/cc exposure guideline recommended in the Program, the specific engineering controls and work practices detailed in the Program, and the recognition that respiratory protection is appropriate in certain operations, will help reduce exposures of the workers who handle RCF products daily.”

Workplace industrial hygiene sampling is conducted using a stratified random sampling plan (SRSP) with stated data quality objectives (DQO). The purposes are to: 1) identify differences in workplace exposures among workers in different functional job categories (FJCs), 2) track time trends in average workplace concentrations, 3) learn if there are systematic differences between exposures of workers employed in identical FJCs by RCF producers compared to those employed by customers, 4) develop “before and after” snapshots to evaluate the effectiveness of intervention and monitoring, and 5) investigate whether there are systematic differences among customers in different industrial sectors. Because one major purpose of these samples is to track exposure changes over time, the data are referred to as historical baseline samples. All industrial hygiene workplace exposure monitoring for RCF is performed by industrial hygienists employed by RCFC member companies following the strict protocol outlined within a Quality Assurance Project Plan (QAPP). Sample collection and laboratory analysis is performed following the National Institute of Occupational Safety and Health (NIOSH) Method 7400 (NIOSH, 1989).

In addition to samples included as part of the SRSP, other samples, termed special emphasis samples (SES), are collected. These include personal monitoring samples, area samples, use of various analytical techniques (e.g., transmission electron microscopy, phase contrast microscopy, scanning electron microscopy) or Tyndall beam technology designed to better understand emission points and design engineered controls (dust collection). Special emphasis sample data are not pooled with historical baseline sample data for developing exposure estimates.

In 2005, RCFC member companies met all deliverables identified under PSP 2002. RCF workplace concentrations decreased in customer facilities by 28.5 percent, while remaining approximately the same (up four percent) for RCF manufacturers. The numbers of samples collected in 2005 actually exceeded the goals of PSP 2002 in each corresponding category, with overages of four percent for internal samples, sixteen percent for external “customer” samples, and thirty-six percent for special emphasis samples. Overall, weighted average airborne fiber concentrations have decreased substantially since 1990 (the inception of PSP for RCF).

In summary, 2005 was another successful year for PSP 2002 as all deliverables were provided and weighted average exposures remained approximately the same for RCF producers or decreased for customers. Looking ahead as the PSP 2002 program enters its fifth and final year, the RCFC is in the process of preparing a proposal for OSHA, NIOSH, and the EPA for the continuation of PSP for RCF. The industry intends to continue the PSP and look for ways to enhance its impact on reducing workplace exposures to airborne fiber.

Conclusion

Product stewardship for RCF has a 15-year proven track record of success. This comprehensive risk management strategy has given birth to new control technologies, modified handling practices, new fiber chemistries, a better understanding of workplace exposures to airborne fiber, and scientific discoveries that advance our understanding of the potential biological impacts of fiber exposure. Certainly, PSP for RCF can best be described as a work in progress, as future developments help us understand how to minimize the potential risks associated with workplace exposures to airborne RCF.

With today’s high energy prices and improving markets for mechanical insulation products, design engineers and facility owners have greater interest in reducing energy consumption by increasing energy efficiency. Further, facility owners are under pressure to do so in ways that reduce craft labor hours or use lower-cost craft labor. In looking for cost efficiencies, there is growing interest in the use of thermal insulating coatings (TICs). If energy costs remain high, or even increase, that interest is likely to grow.

What Are Insulating Coatings?

TICs are not new. I first heard of them about 10 years ago, and they have been commercially available longer than that. One TIC manufacturer defines them as follows:

…A true insulating coating is one that produces temperature differentials across its surface, no matter where the coating is placed (i.e., to the hot/cold surface or inside or outside).

That may be true, but a temperature differential can be produced by almost any material that has some thickness and thermal conductivity, and not all of those materials would necessarily be considered thermal insulation. One typically reliable source for definitions like this is ASTM. While ASTM does not have a definition for "thermal insulating coating," ASTM C168 (the standard for insulation terminology) includes a definition for "thermal insulation."

thermal insulation (n): a material or assembly of materials used to provide resistance to heat flow.

Further on in C168, there is a definition for "coating."

coating (n): a liquid or semi-liquid that dries or cures to form a protective finish, suitable for application to thermal insulation or other surfaces in thickness of 30 mils (0.76 mm) or less, per coat.

Combining these two definitions (allowing that a "thermal insulating coating" does not have to cover thermal insulation but can act as thermal insulation on its own) yields a proposed definition for a TIC:

thermal insulating coating (n): a liquid or semi-liquid, suitable for application to a surface in a thickness of 30 mils (0.76 mm) or less per coat, that dries or cures to simultaneously form a protective finish and provide resistance to heat flow.

Since Insulation Outlook is an insulation magazine (and this author’s expertise is in thermal insulation), the rest of this article will discuss TICs as thermal insulation materials, rather than coatings. An evaluation of TICs’ role as coatings will be left to the coating experts. Further, because this magazine addresses mechanical insulation and its applications, this discussion is limited to TICs in mechanical insulating roles, not in insulating building envelopes.

An Early Study of Insulating Coatings

This author first conducted a study of TICs as a form of thermal insulation about eight years ago, while working for a former employer. I learned that there are several different manufacturers in North America and that TICs contain a granular material, referred to by some at that time as ceramic beads. I also learned that TICs can be applied by brush or spray and that, in general, the coatings were rated for maximum-use temperatures of 500°F.

One supplier sent me a sample in the form of a soup can that was coated on its sides with about one-quarter inch of dry insulating coating. The bottom of the can was not coated. The instructions were to pour hot water into the can while holding it by the sides, and to notice that I could continue to hold the can without getting burned. The instructions noted that a quick touch of the bottom of the can would show how hot the contents were. I followed the instructions and did, indeed, notice that I could hold the coated soup can indefinitely. While not scientific proof, it certainly demonstrated that the TIC could be an effective insulator, providing personnel protection from hot water.

I also conducted some thermal analyses using the ASTM C680 computer code and concluded that there were some definite thermal benefits to be achieved with a thickness of one-eighth to one-quarter of an inch, particularly on relatively mild-temperature surfaces of up to 250°F or so. However, it was clear that these thicknesses would require multiple coats, at about 20 mils per coat, so any potential labor savings from using the TICs were significantly reduced. I also noticed that with just several coats, heat loss could be reduced by at least fifty percent compared to the bare surface. Significant heat loss reduction could be achieved on surfaces up to 500°F (although it should be remembered that conventional insulation typically provides at least a ninety-percent heat loss reduction with a thickness of just one inch).

What’s on the Market Today?

For this article, I reviewed literature and technical information available on the Internet, as well as other sources. One company’s website contains some useful technical information for a product the company classifies as a ceramic coating, since it contains ceramic beads. It gives the thermal conductivity as 0.097 W/m-K (0.676 Btu-in/hr-ft2-°F) at 23°C (73.4°F). By comparison, the thermal conductivity of calcium silicate, ASTM C533 Type I Block, is 0.059 W/m-K (0.41 Btu-in/hr-ft2-°F) at 38°C (100°F), a forty-percent lower value at a higher mean temperature. It appears that this particular ceramic insulating coating is not as good an insulator as calcium silicate. Still, its thermal conductivity appears low enough to meet the definition proposed above for a "thermal insulating coating" and to act as an insulation material, provided it is applied in sufficient thickness.
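
To put those conductivities in context, the thermal resistance of a layer is simply its thickness divided by its conductivity; in the sketch below the conductivities are the values quoted above, and the thicknesses are chosen only for illustration.

    # Sketch: thermal resistance (R = thickness / conductivity) of a thin ceramic
    # coating versus calcium silicate block, using the conductivities quoted above.
    # Thicknesses are illustrative.
    def r_value(thickness_in, k_btu_in_per_hr_ft2_f):
        return thickness_in / k_btu_in_per_hr_ft2_f    # hr-ft2-F/Btu

    print("0.25 in of ceramic coating:", round(r_value(0.25, 0.676), 2))
    print("1.0 in of calcium silicate:", round(r_value(1.0, 0.41), 2))

A quarter inch of the coating provides roughly R-0.4, while an inch of calcium silicate provides roughly R-2.4, which is why multiple coats are needed before a TIC behaves like conventional insulation.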

I was frustrated in my attempts to obtain more detailed technical information that a designer could use to design an insulation system (e.g., several mean temperature/thermal conductivity data pairs and a surface emittance). Typical of the problems I encountered searching for such technical information, one manufacturer referred to a test to determine thermal conductivity using a 212°F heat source, noting the following:

…finding showed that heat transfer was substantially reduced in the testing situation from 367.20 BTUs measured on bare metal to 3.99 BTU on the metal surface [coated with the product].

Without providing the thermal conductivities derived from these tests, this statement leaves the reader with more questions than answers.

  • What was the hot surface temperature?
  • What was the cold side surface temperature?
  • What was the TIC thickness?
  • What test procedure was used?

The literature for this particular product gives a "K Factor Insulating Rating" of 0.019 W/m-K (0.132 Btu-in/hr-ft2-°F). This value is about one-fifth that of the other TIC mentioned above, which is difficult to believe.

Literature from another company, for whose product I could find no technical information, speaks generally to the company’s history and skilled experts who will help designers specify the company’s coatings. While I do not doubt that the company has technical experts, it would be helpful for them to provide potential users of their TIC products sufficient technical information to do a design. At a minimum, this information would include several thermal conductivities at corresponding mean temperatures. As an alternative, the literature should provide thermal conductance values at several service temperatures for several thicknesses, as well as surface emittance. An insulation designer cannot do a design without such technical information.

In terms of labor required for installation, one supplier reported that a team of three painters can apply 3,000 square feet of a 20-mil TIC coat per hour, or 1,000 square feet per craft labor hour. This is impressive until one considers how much labor may be needed to apply all the necessary coats. To apply a total thickness of one-eighth of an inch, which would require about six coats, the expected productivity would be about 167 square feet per craft labor hour. A one-quarter-inch thickness, which would require about twelve coats, would result in a labor productivity of about 83 square feet per craft labor hour. These productivity figures, and the costs associated with them based on the labor rate for local painters, should be compared to those for conventional insulation (a subject beyond the scope of this article).
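
The arithmetic behind those figures is simple; the sketch below uses the reported rate of 1,000 square feet per craft labor hour per 20-mil coat.

    # Sketch: craft-labor productivity for multi-coat TIC application, based on
    # the reported 1,000 sq ft per craft labor hour for a single 20-mil coat.
    def productivity(total_thickness_in, rate_per_coat=1000.0, mils_per_coat=20.0):
        coats = round(total_thickness_in * 1000.0 / mils_per_coat)
        return coats, rate_per_coat / coats

    for thickness in (0.125, 0.25):
        coats, rate = productivity(thickness)
        print("%.3f in: about %d coats, ~%.0f sq ft per craft labor hour" % (thickness, coats, rate))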

What Do Engineers and Designers Need to Design an Insulation System?

Several TIC manufacturers mentioned that their materials benefit from reflective, low-emittance surfaces and stated that their performance is not predictable using standard calculation methodologies. For a design engineer or other designer of a thermal insulation system, however, it is critical to have this information. Typically, to do a thermal design (i.e., to determine the necessary insulation thickness), the designer needs a thermal conductivity curve (or a minimum of three pairs of mean temperature and thermal conductivity values) and the available thicknesses. To ensure that the material suits the application, the designer also should have the maximum and minimum use temperatures. Finally, if the insulation is to be left unjacketed, which should be the case with TICs, the designer would need the surface emittance.

With this information, a designer should be able to determine the required insulation thickness for a particular orientation, pipe size (if applicable), pipe or equipment surface temperature, ambient temperature, and wind speed. With conventional insulation, the designer might use a tool such as 3E Plus® (available for free download from the North American Insulation Manufacturers Association at www.pipeinsulation.org). Regardless of the choice of design tool, thermal conductivity data and surface emittance values will be needed to design for a hot or cold surface application.

For below-ambient application, in addition to the information noted above, the designer would need the vapor permeance and moisture absorption of the material. The designer needs to be assured that the design will prevent moisture migration into the TIC and then onto the chilled surface.

Where Are Thermal Insulating Coatings Best Used?

To determine where TICs might best be used, this author conducted some heat loss analyses using 3E Plus and thermal conductivity data provided by one of the manufacturers. To give TICs the benefit of the doubt, I used a constant thermal conductivity of 0.019 W/m-°K (0.132 Btu-in/hr-ft2-°F), the lower of the two values mentioned above. I do not have thermal conductivity values at temperatures other than an assumed 75°F mean, so I assumed that the TIC’s thermal conductivity increases one percent for every 10°F increase in mean temperature, which is approximately true for calcium silicate. Further, for personnel protection, I assumed a maximum allowable surface temperature of 160°F rather than the traditional 140°F because the latter assumes metal jacketing (not unjacketed) insulation material. As we know, hot metal has a high contact temperature, meaning that at a given temperature, the heat is transferred to the human body more quickly than from a material with a low contact temperature. Finally, I assumed that the TIC has a surface emittance of 0.9, which makes it easier to insulate for personnel protection than using a low surface emittance. I believe this is likely a good value to use, although it seems to contradict some of the TIC manufacturers who attribute their product’s performance to a highly reflective surface.

With these assumptions, what did my calculations show for personnel protection? Using a TIC thickness in the range of 0.20 inches (i.e., ten coats at 20 mils per coat) on a 350°F eight-inch nominal pipe size (NPS) pipe in a 90°F ambient with 0 mph wind, I could obtain a surface temperature less than 160°F. Thus, with a sufficient number of coats on a 350°F pipe, personnel protection could be achieved.

I also evaluated the TIC for condensation control on a below-ambient surface and concluded that on a 60°F eight-inch NPS pipe in a 90°F eighty-five percent relative humidity ambient with 0 mph wind, I could prevent condensation with a 0.44-inch total thickness (i.e., twenty-two coats at 20 mil per coat). For a TIC to be effective for condensation control on a 50°F line, however, there probably would have to be a minimum of five-eighths inch, or thirty coats. Therefore, this thickness, for a TIC in a condensation control application, might be prohibitive from a total cost of labor perspective.

One potential advantage of a TIC over conventional insulation may be for use on a 250°F or lower surface where corrosion under insulation (CUI) might be a problem with conventional insulation. First of all, only several coats would be required (probably six to eight) to provide a surface temperature of less than 160°F Assuming that the TIC can be shown to be an effective weather barrier, it may well have the necessary insulating value to provide personnel protection and simultaneously prevent CUI on surfaces up to about 250°F. Conventional insulation can have difficulty with such surfaces on outdoor applications because there is insufficient temperature to drive off any water that leaks through the jacketing and into the insulation.

Additionally, if the designer has a below-ambient surface that needs insulation for condensation control, and that surface is difficult to insulate by conventional means, then a TIC may well prove to be the most cost-effective means of insulating that surface, as long as its temperature is above 60°F or so (i.e., not too cold). The designer needs to evaluate the total cost of both, however, including the labor required to apply the necessary number of TIC coats to provide condensation control. Only then will he or she know which insulation solution-conventional insulation or a TIC-is more cost-effective.

What Standards Activities Are Planned?

The ASTM Committee on Thermal Insulation, C16, will be holding a first Task Group meeting at its next semi-annual meeting in Toronto, Ontario, Canada, in late April of this year. The Task Group will focus on developing a test method for TICs-particularly one for use in mechanical applications. This Task Group meeting should prove to be valuable because it will give interested ASTM members the opportunity to evaluate the testing needs for TICs and the existing ASTM methods’ ability to meet those needs.

In terms of existing test methods, ASTM C177, the guarded hot plate apparatus, is typically used to determine the thermal transmission properties of mechanical insulation materials. It may not be ideally suited for evaluating the thermal performance of a thin TIC since it has a thickness of only about one-eighth to one-quarter inch and is sandwiched between plates. With no surface exposed to the ambient, it is precluded from reaping any particular surface radiation benefits that this novel type of insulation might have.

The pipe test method, ASTM C335, could be ideally suited to the task because there is a surface exposed to the ambient and it simply measures the heat required to maintain a constant temperature of the simulated pipe. This test method by itself does not take into account the thickness of the material, and it does not need to. What you measure is what you get. Results can be expressed as thermal transmittance, thermal conductance, or thermal conductivity, depending on how one cranks the numbers. Since an appropriate test method already exists, there is arguably no need to develop a new test method for evaluating TICs’ thermal performance. However, I will leave that recommendation to this new ASTM Task Group.

What Is Needed From the TIC Manufacturers

For their products to be specified for use in mechanical applications, TIC manufacturers must provide basic design information on the products. Further, any TIC technical information must be backed up by certified test reports available upon request by the owner or the architectural/engineering (A/E) firm doing the design. Design engineers require detailed engineering design information on the products they intend to use. Design professionals, whether working for a facility owner or for an A/E firm, cannot simply delegate an insulation design to a material manufacturer. Design engineers are paid to do engineering design. They, and their firm, are legally liable for the accuracy of that design. To control the design output, they must control both the design input and the computational methodology.

If some TIC manufacturers are concerned that the use of thermal conductivity for their products is misleading, they must provide thermal transmittance data for different thicknesses at different service temperatures. I believe this data can be accurately obtained using ASTM C335 for above-ambient temperatures. Greater openness on the part of TIC manufacturers about their products’ performance will translate to greater respect from the design community and owner/operators of industrial facilities. From this openness and respect-and demonstrated thermal performance-acceptance of TIC products will follow, and specifications can then include TICs for suitable applications.

Acknowledgements: The author spoke to a number of specifying engineers to get their opinions and perspective for this article. He is grateful for their assistance.

Editor’s note: The opinions and information shared by the author in the preceding article are his and have not been confirmed by the NIA.

Figure 1

Nanotechnology developed thermal insulating coating over pipe.
Picture by Industrial Nanotech, INC.

Figure 2

Nanotechnology developed thermal insulating coating over textile mill.
Picture by Industrial Nanotech, INC.

With today’s high energy prices and improving markets for mechanical insulation products, design engineers and facility owners have greater interest in reducing energy consumption by increasing energy efficiency. Further, facility owners are under pressure to do so in ways that reduce craft labor hours or use lower-cost craft labor. In looking for cost efficiencies, there is growing interest in the use of thermal insulating coatings (TICs). If energy costs remain high, or even increase, that interest is likely to grow.

What Are Insulating Coatings?

TICs are not new. I first heard of them about 10 years ago, and they have been commercially available longer than that. One TIC manufacturer defines them as follows: “A true insulating coating is one that produces temperature differentials across its surface, no matter where the coating is placed (i.e., to the hot/cold surface or inside or outside).”

That may be true, but a temperature differential can be produced by almost any material that has some thickness and thermal conductivity—and not all of those materials would necessarily be considered thermal insulation. One typically reliable source for definitions like this is the American Society for Testing and Materials (ASTM). While ASTM does not have a definition for “thermal insulating coating,” ASTM C168 (the standard for insulation terminology) includes the following definition for thermal insulation:

thermal insulation (n): a material or assembly of materials used to provide resistance to heat flow

Further on in C168 is the following definition for coating:

coating (n): a liquid or semi-liquid that dries or cures to form a protective finish, suitable for application to thermal insulation or other surfaces in thickness of 30 mils (0.76 mm) or less, per coat

Combining these two definitions—allowing that a “thermal insulating coating” does not have to cover thermal insulation but can act as thermal insulation alone—yields a proposed definition for a TIC:

thermal insulating coating (n): a liquid or semi-liquid, suitable for application to a surface in a thickness of 30 mils (0.76 mm) or less per coat, that dries or cures to simultaneously form a protective finish and provide resistance to heat flow

Since Insulation Outlook is an insulation magazine (and this author’s expertise is in thermal insulation), the rest of this article will discuss TICs as thermal insulation materials, rather than coatings. An evaluation of TICs’ role as coatings will be left to the coating experts. Further, because this magazine addresses mechanical insulation and its applications, this discussion is limited to TICs in mechanical insulating roles, not in insulating building envelopes.

An Early Study of Insulating Coatings

This author first conducted a study of TICs as a form of thermal insulation about eight years ago when working for a former employer. I learned that there are several different manufacturers in North America and that TICs contain a granular material, referred to by some at that time as ceramic beads. I also learned that TICs can be applied by brush or spray; and, in general, the coatings were rated for maximum-use temperatures of 500°F.

One supplier sent me a sample in the form of a soup can that was coated on its sides with about one-quarter inch of dry insulating coating. The bottom of the can was not coated. The instructions were to pour hot water into the can while holding it by the sides, and to notice that I could continue to hold the can without getting burned. The instructions noted that a quick touch of the bottom of the can would show how hot the contents were. I followed the instructions and did, indeed, notice that I could hold the coated soup can indefinitely. While not scientific proof, it certainly demonstrated that the TIC could be an effective insulator, providing personnel protection from hot water.

I also conducted some thermal analyses using the ASTM C680 computer code and concluded that there were some definite thermal benefits to be achieved with a thickness of one-eighth to one-quarter of an inch, particularly on relatively mild temperature surfaces of up to 250°F or so. However, it was clear that these thicknesses would require multiple coats, at about 20 mils per coat, so any potential labor savings from using the TICs were significantly reduced. I also noticed that with just several coats, heat loss could be reduced by at least fifty percent compared to the bare surface. Significant heat loss reduction could be achieved on surfaces up to 500°F (although it should be remembered that conventional insulation typically provides at least a ninety-percent heat loss reduction with a thickness of just one inch).

What’s on the Market Today?

For this article, I reviewed literature and technical information available on the Internet, as well as other sources. One company’s website contains some useful technical information for a product it classifies as a ceramic coating, since it contains ceramic beads. It gives the thermal conductivity as 0.097 W/m-°K (0.676 Btu-in/hr-ft2-°F) at 23°C (73.4°F). By comparison, the thermal conductivity of calcium silicate, ASTM C533 Type I Block, is 0.059 W/m-°K (0.41 Btu-in/hr-ft2-°F) at 38°C (100°F), a roughly forty-percent lower value at a higher mean temperature. It appears that this particular ceramic coating is not as good an insulator as calcium silicate. Still, the product could certainly meet the definition proposed above for a “thermal insulating coating”: its thermal conductivity appears to be low enough for it to act as an insulation material if applied in a sufficient number of coats.
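
To put that comparison in resistance terms, the following minimal sketch (in Python) converts the two quoted conductivities into R-values for illustrative thicknesses of one-quarter inch of coating and one inch of block. The thicknesses are my own assumptions for illustration, not manufacturer recommendations.

    # Minimal sketch: R-values implied by the quoted thermal conductivities.
    # Thicknesses are illustrative assumptions only.

    def r_value(k_btu_in, thickness_in):
        """R = thickness / k, with k in Btu-in/hr-ft2-F and thickness in inches."""
        return thickness_in / k_btu_in

    ceramic_coating_k = 0.676    # Btu-in/hr-ft2-F at about 73 F (quoted above)
    calcium_silicate_k = 0.41    # Btu-in/hr-ft2-F at 100 F (ASTM C533 Type I)

    print("1/4 in. ceramic coating R =", round(r_value(ceramic_coating_k, 0.25), 2))
    print("1 in. calcium silicate R  =", round(r_value(calcium_silicate_k, 1.0), 2))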

I was frustrated in my attempts to obtain more detailed technical information that a designer could use to design an insulation system—e.g., several mean temperature-thermal conductivity data pairs and a surface emittance. Typical of the problems I encountered searching for such technical information, one manufacturer referred to a test to determine thermal conductivity from heat source exposure of 212°F, noting the following: “…finding showed that heat transfer was substantially reduced in the testing situation from 367.20 BTUs measured on bare metal to 3.99 BTU on the metal surface [coated with the product].”

Without providing the thermal conductivities derived from these tests, this statement leaves the reader with more questions than answers, including the following:

  • What was the hot surface temperature?
  • What was the cold side surface temperature?
  • What was the TIC thickness?
  • What test procedure was used?

The literature for this particular product gives a “K Factor Insulating Rating” of 0.019 W/m-°K (0.132 Btu-in/hr-ft2-°F). This value is about one-fifth that of the other TIC mentioned above, which is difficult to believe.

Literature from another company, for whose product I could find no technical information, speaks generally to the company’s history and skilled experts who will help designers specify the company’s coatings. While I do not doubt that the company has technical experts, it would be helpful for them to provide potential users of their TIC products sufficient technical information to do a design. At a minimum, this information would include several thermal conductivities at corresponding mean temperatures. As an alternative, the literature should provide thermal conductance values at several service temperatures for several thicknesses, as well as surface emittance. An insulation designer cannot do a design without such technical information.

In terms of labor required for installation, one supplier reported that a team of three painters can apply 3,000 square feet of a 20-mil TIC coat per hour, or 1,000 square feet per craft labor hour. This is impressive until one considers how much labor may be needed to add all the necessary coats. To apply a total thickness of one-eighth of an inch, which would require about six coats, the expected productivity would be about 167 square feet per craft labor hour. A one-quarter-inch thickness, which would require about twelve coats, would result in a labor productivity of about 83 square feet per hour. These productivity calculations, and the costs associated with those productivities based on the labor rate for local painters, should be compared to those for conventional insulation (which is a subject beyond the scope of this article).
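
The arithmetic behind those figures is simple enough to generalize. The short sketch below (Python) starts from the supplier-reported 1,000 square feet per craft labor hour for a single 20-mil coat and divides by the number of coats; the coat counts are the approximate ones used above.

    # Net application productivity when multiple 20-mil coats are required.
    SINGLE_COAT_RATE = 1000.0   # sq ft per craft labor hour for one 20-mil coat

    def net_productivity(coats):
        """Square feet per craft labor hour for a build-up requiring 'coats' passes."""
        return SINGLE_COAT_RATE / coats

    print(net_productivity(6))    # about 167 sq ft/hr for roughly 1/8 in. (six coats)
    print(net_productivity(12))   # about 83 sq ft/hr for roughly 1/4 in. (twelve coats)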

What Do Engineers and Designers Need to Design an Insulation System?

Several TIC manufacturers mentioned that their materials benefit from reflective, low-emittance surfaces and stated that their performance is not predictable using standard calculation methodologies. For a design engineer or other designer of a thermal insulation system, however, it is critical to have data that can be used in a standard design calculation. Typically, to do a thermal design (i.e., to determine the necessary insulation thickness), the designer needs a thermal conductivity curve (or a minimum of three mean temperature/thermal conductivity pairs) and the available thicknesses. To ensure that the correct application is used, the designer also should have the maximum and minimum use temperatures. Finally, if the insulation is to be left unjacketed, which should be the case with TICs, the designer would need the surface emittance.
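
To illustrate why several mean-temperature/thermal-conductivity pairs matter, the sketch below (Python) fits a simple quadratic conductivity curve through three hypothetical data pairs so that k can be evaluated at any design mean temperature. The numbers are placeholders standing in for manufacturer-published data, not measured TIC values.

    import numpy as np

    # Three hypothetical (mean temperature F, k in Btu-in/hr-ft2-F) pairs;
    # placeholders only, standing in for manufacturer-published data.
    mean_temps = np.array([100.0, 300.0, 500.0])
    k_values = np.array([0.30, 0.36, 0.45])

    # Fit k(T) as a quadratic, a common form for published conductivity curves.
    k_curve = np.poly1d(np.polyfit(mean_temps, k_values, 2))

    print("k at a 250 F mean is about", round(float(k_curve(250.0)), 3), "Btu-in/hr-ft2-F")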

With this information, a designer should be able to determine the required insulation thickness for a particular orientation, pipe size (if applicable), pipe or equipment surface temperature, ambient temperature, and wind speed. With conventional insulation, the designer might use a tool such as 3E Plus® (available for free download from the North American Insulation Manufacturers Association at www.pipeinsulation.org). Regardless of the choice of design tool, thermal conductivity data and surface emittance values will be needed to design for a hot or cold surface application.

For below-ambient application, in addition to the information noted above, the designer would need the vapor permeance and moisture absorption of the material. The designer needs to be assured that the design will prevent moisture migration into the TIC and then onto the chilled surface.

Where Are Thermal Insulating Coatings Best Used?

To determine where TICs might best be used, this author conducted some heat loss analyses using 3E Plus and thermal conductivity data provided by one of the manufacturers. To give TICs the benefit of the doubt, I used a constant thermal conductivity of 0.019 W/m-°K (0.132 Btu-in/hr-ft2-°F), the lower of the two values mentioned above. I do not have thermal conductivity values at temperatures other than an assumed 75°F mean, so I assumed that the TIC’s thermal conductivity increases one percent for every 10°F increase in mean temperature, which is approximately true for calcium silicate. Further, for personnel protection, I assumed a maximum allowable surface temperature of 160°F rather than the traditional 140°F because the latter assumes metal-jacketed (rather than unjacketed) insulation. As we know, hot metal has a high contact temperature, meaning that at a given temperature, heat is transferred to the human body more quickly than from a material with a low contact temperature. Finally, I assumed that the TIC has a surface emittance of 0.9, which makes it easier to insulate for personnel protection than using a low surface emittance. I believe this is likely a good value to use, although it seems to contradict some of the TIC manufacturers who attribute their products’ performance to a highly reflective surface.

With these assumptions, what did my calculations show for personnel protection? Using a TIC thickness in the range of 0.20 inches (i.e., ten coats at 20 mils per coat) on a 350°F eight-inch nominal pipe size (NPS) pipe in a 90°F ambient with 0 mph wind, I could obtain a surface temperature less than 160°F. Thus, with a sufficient number of coats on a 350°F pipe, personnel protection could be achieved.
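
That result can be roughly checked without 3E Plus. The sketch below (Python) balances cylindrical conduction through the coating against still-air convection plus radiation from the outer surface, using the assumptions stated above: k of 0.132 Btu-in/hr-ft2-°F at a 75°F mean, rising one percent per 10°F of mean temperature; a surface emittance of 0.9; a 350°F pipe; a 90°F ambient; and no wind. The simplified natural-convection correlation for a horizontal cylinder is my own assumption and is not part of 3E Plus, so treat the output as an estimate only.

    import math

    SIGMA = 0.1714e-8            # Stefan-Boltzmann constant, Btu/hr-ft2-R^4
    EPS = 0.9                    # assumed TIC surface emittance
    K75 = 0.132 / 12.0           # Btu/hr-ft-F at a 75 F mean temperature

    T_PIPE, T_AMB = 350.0, 90.0  # F
    OD_PIPE = 8.625 / 12.0       # ft, outside diameter of 8-in. NPS pipe
    THICKNESS = 0.20 / 12.0      # ft, ten coats at 20 mils per coat

    def k_at(mean_temp_f):
        """Assumed conductivity: +1 percent per 10 F of mean temperature above 75 F."""
        return K75 * (1.0 + 0.01 * (mean_temp_f - 75.0) / 10.0)

    def q_conduction(t_surf):
        """Heat flow per foot of pipe conducted through the cylindrical coating."""
        r1, r2 = OD_PIPE / 2.0, OD_PIPE / 2.0 + THICKNESS
        k = k_at(0.5 * (T_PIPE + t_surf))
        return 2.0 * math.pi * k * (T_PIPE - t_surf) / math.log(r2 / r1)

    def q_surface(t_surf):
        """Heat rejected per foot by still-air convection plus radiation."""
        r2 = OD_PIPE / 2.0 + THICKNESS
        area = 2.0 * math.pi * r2                    # ft2 per foot of pipe
        dt = t_surf - T_AMB
        h_conv = 0.27 * (dt / (2.0 * r2)) ** 0.25    # assumed simplified correlation
        ts_r, ta_r = t_surf + 459.67, T_AMB + 459.67
        q_rad = EPS * SIGMA * (ts_r ** 4 - ta_r ** 4)
        return area * (h_conv * dt + q_rad)

    # Bisection on surface temperature between ambient and pipe temperature.
    lo, hi = T_AMB + 0.1, T_PIPE - 0.1
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if q_conduction(mid) > q_surface(mid):
            lo = mid    # surface too cool to reject the incoming heat; raise it
        else:
            hi = mid
    print(f"Estimated surface temperature: {0.5 * (lo + hi):.0f} F")

With these inputs the balance settles just under 160°F, consistent with the 3E Plus result described above.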

I also evaluated the TIC for condensation control on a below-ambient surface and concluded that on a 60°F eight-inch NPS pipe in a 90°F, eighty-five-percent relative humidity ambient with 0 mph wind, I could prevent condensation with a 0.44-inch total thickness (i.e., twenty-two coats at 20 mils per coat). For a TIC to be effective for condensation control on a 50°F line, however, there probably would have to be a minimum of five-eighths of an inch, or about thirty coats. At that thickness, a TIC in a condensation control application might be prohibitive from a total labor cost perspective.
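
The governing criterion for condensation control is simply that the coating’s outer surface stay at or above the ambient dew point. The short sketch below (Python) computes that dew point for the 90°F, eighty-five-percent relative humidity ambient assumed above, using the Magnus approximation; the correlation choice is mine, not something taken from any TIC literature.

    import math

    def dew_point_f(temp_f, rh_percent):
        """Dew point via the Magnus approximation (coefficients 17.62 and 243.12 C)."""
        t_c = (temp_f - 32.0) / 1.8
        gamma = math.log(rh_percent / 100.0) + 17.62 * t_c / (243.12 + t_c)
        td_c = 243.12 * gamma / (17.62 - gamma)
        return td_c * 1.8 + 32.0

    # Ambient assumed above: 90 F and 85 percent relative humidity.
    print(f"Dew point is about {dew_point_f(90.0, 85.0):.1f} F")

The dew point works out to roughly 85°F, so a 60°F line leaves about a 25°F deficit for the coating to make up, and a 50°F line about 35°F, which is why the required number of coats climbs so quickly.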

One potential advantage of a TIC over conventional insulation may be for use on a 250°F or lower surface where corrosion under insulation (CUI) might be a problem with conventional insulation. Only a few coats (probably six to eight) would be required to provide a surface temperature of less than 160°F. Assuming that the TIC can be shown to be an effective weather barrier, it may well have the necessary insulating value to provide personnel protection and simultaneously prevent CUI on surfaces up to about 250°F. Conventional insulation can have difficulty with such surfaces in outdoor applications because there is insufficient temperature to drive off any water that leaks through the jacketing and into the insulation.

Additionally, if the designer has a below-ambient surface that needs insulation for condensation control, and that surface is difficult to insulate by conventional means, then a TIC may well prove to be the most cost-effective means of insulating that surface, as long as its temperature is above 60°F or so (i.e., not too cold). The designer needs to evaluate the total cost of both, however, including the labor required to apply the necessary number of TIC coats to provide condensation control. Only then will he or she know which insulation solution—conventional insulation or a TIC—is more cost-effective.

What Standards Activities Are Planned?

The ASTM Committee on Thermal Insulation, C16, will be holding a first Task Group meeting at its next semi-annual meeting in Toronto, Ontario, Canada, in late April of this year. The Task Group will focus on developing a test method for TICs—particularly one for use in mechanical applications. This Task Group meeting should prove to be valuable because it will give interested ASTM members the opportunity to evaluate the testing needs for TICs and the existing ASTM methods’ ability to meet those needs.

In terms of existing test methods, ASTM C177, the guarded hot plate apparatus, is typically used to determine the thermal transmission properties of mechanical insulation materials. It may not be ideally suited for evaluating the thermal performance of a thin TIC, since the specimen would be only about one-eighth to one-quarter inch thick and sandwiched between plates. With no surface exposed to the ambient, the specimen is precluded from reaping any particular surface radiation benefits that this novel type of insulation might have.

The pipe test method, ASTM C335, could be ideally suited to the task because there is a surface exposed to the ambient and it simply measures the heat required to maintain a constant temperature of the simulated pipe. This test method by itself does not take into account the thickness of the material, and it does not need to. What you measure is what you get. Results can be expressed as thermal transmittance, thermal conductance, or thermal conductivity, depending on how one cranks the numbers. Since an appropriate test method already exists, there is arguably no need to develop a new test method for evaluating TICs’ thermal performance. However, I will leave that recommendation to this new ASTM Task Group.
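
To make concrete how one C335 measurement can be reported several ways, the sketch below (Python) converts a heat flow per unit length of test pipe into apparent thermal conductivity, a conductance referenced to the outer surface area, and a transmittance per foot of pipe. The input numbers and the area and temperature-difference bases are placeholders and assumptions of mine, not test data or requirements of the standard.

    import math

    # Placeholder inputs standing in for a C335-style measurement; not test data.
    q_per_ft = 340.0                            # heat flow, Btu/hr per ft of test pipe
    t_pipe, t_surf, t_amb = 350.0, 160.0, 90.0  # F
    r_in, r_out = 4.3125 / 12.0, 4.5125 / 12.0  # ft, pipe radius and coated radius

    # Apparent thermal conductivity; the one form that needs the thickness.
    k_app = q_per_ft * math.log(r_out / r_in) / (2.0 * math.pi * (t_pipe - t_surf))

    # Conductance referenced to the outer surface area (assumed basis).
    conductance = q_per_ft / (2.0 * math.pi * r_out * (t_pipe - t_surf))

    # Transmittance per foot of pipe, pipe-to-ambient (assumed basis).
    transmittance = q_per_ft / (t_pipe - t_amb)

    print(f"apparent k: {k_app * 12.0:.3f} Btu-in/hr-ft2-F")
    print(f"conductance: {conductance:.2f} Btu/hr-ft2-F")
    print(f"transmittance: {transmittance:.2f} Btu/hr-ft-F")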

What Is Needed From the TIC Manufacturers

For their products to be specified for use in mechanical applications, TIC manufacturers must provide basic design information on the products. Further, any TIC technical information must be backed up by certified test reports available upon request by the owner or the architectural/engineering (A/E) firm doing the design. Design engineers require detailed engineering design information on the products they intend to use. Design professionals, whether working for a facility owner or for an A/E firm, cannot simply delegate an insulation design to a material manufacturer. Design engineers are paid to do engineering design. They, and their firm, are legally liable for the accuracy of that design. To control the design output, they must control both the design input and the computational methodology.

If some TIC manufacturers are concerned that the use of thermal conductivity for their products is misleading, they must provide thermal transmittance data for different thicknesses at different service temperatures. I believe this data can be accurately obtained using ASTM C335 for above-ambient temperatures. Greater openness on the part of TIC manufacturers about their products’ performance will translate to greater respect from the design community and owner/operators of industrial facilities. From this openness and respect—and demonstrated thermal performance—acceptance of TIC products will follow, and specifications can then include TICs for suitable applications.

Acknowledgements: The author spoke to a number of specifying engineers to get their opinions and perspective for this article. He is grateful for their assistance.

Note: The opinions and information shared by the author in the preceding article are his and have not been confirmed by the NIA.

Figure 1

Nanotechnology-developed thermal insulating coating applied over a pipe. Photo by Industrial Nanotech, Inc.

Figure 2

Nanotechnology-developed thermal insulating coating applied at a textile mill. Photo by Industrial Nanotech, Inc.

Energy Overview

The West Texas Intermediate (WTI) crude oil spot price is projected to average $68 per barrel in both 2006 and 2007. Summer 2006 (April 1 to September 30) regular gasoline pump prices are expected to average $2.76 per gallon, 39 cents higher than last year’s average of $2.37 per gallon.

Natural gas prices are projected to be lower through the rest of this year relative to the corresponding 2005 levels. The Henry Hub spot price is expected to average $7.74 per thousand cubic feet (mcf) in 2006, down $1.12 from the 2005 average. For 2007, the Henry Hub average price will likely move back up to $8.81 per mcf, assuming sustained high oil prices, normal weather, and continued economic expansion in the United States.

Global Petroleum Markets

Although world petroleum consumption growth has slowed because of higher prices, projected consumption growth nevertheless remains strong at 1.7 million barrels per day (bbl/d) in 2006 and 1.9 million bbl/d in 2007. Most of this consumption growth will be met by increases in non-OPEC (Organization of Petroleum Exporting Countries) production. The shortfall will be compensated for by increases in OPEC production or drawdown of inventories.

As EIA has revised historical non-OECD (Organization for Economic Cooperation and Development) demand in the International Energy Annual 2004, this new baseline has changed our forecast slightly. For 2004, non-OECD and, hence, world oil demand is assessed at about 200,000 bbl/d higher than the baseline used for the previous STEO. Changes were most noticeable in oil demand in the former Soviet Union, with demand revised lower in a few countries. This was more than made up for by an upward revision to demand in non-OECD Asia, excluding China. Going forward, growth rates in world demand based on the new baseline for 2005, 2006, and 2007 remain unchanged. Nevertheless, the higher absolute levels of demand contribute to our view of tight fundamentals throughout the forecast period.

First-quarter 2006 production data show slightly higher-than-expected non-OPEC production, but growth for the year is still expected to be 0.8 million bbl/d for 2006. This includes 0.2 million bbl/d of total liquids growth from the United States as producers continue to recover from losses suffered during the 2005 hurricane season. Outside of the United States, large new projects are projected to lead to production increases over 2006 and 2007 of almost 500,000 bbl/d in Angola, almost 400,000 bbl/d around the Caspian Sea, over 200,000 bbl/d in Canada, and almost 200,000 bbl/d in Brazil. These new supplies will be partially offset by declines in many mature fields, such as those in the North Sea, Mexico, and the Middle East.

US Petroleum Markets

Average domestic crude oil production is expected to increase by 157,000 bbl/d or 3.1 percent in 2006, to a level of almost 5.3 million bbl/d. For 2007, a 6.6-percent increase is expected, resulting in an average production rate of 5.6 million bbl/d for the year.

Total U.S. petroleum product consumption declined by 77,000 bbl/d, or 0.4 percent, in 2005. Higher prices and the impact of hurricanes on liquefied petroleum gases and petrochemical feedstocks drove this decline in consumption. In 2006 and 2007, petroleum consumption is projected to increase by 0.9 percent and 2.1 percent, respectively. Motor gasoline consumption, which exhibited almost no growth in 2005, is projected to grow 0.9 percent in 2006 and 1.3 percent in 2007. This pattern reflects the anticipation of continued economic growth and the stabilization of motor gasoline prices. Distillate (diesel fuel and heating oil) consumption, having increased 1.3 percent in 2005, is projected to increase 2.4 percent in 2006 and 3.1 percent in 2007. Transportation diesel fuel consumption is projected to show solid growth in 2006 and 2007 of 3.4 percent per year as the economy continues to expand. However, this year’s unusually warm weather during the first quarter resulted in a substantial decline in heating oil demand from year-ago levels, which, given NOAA’s heating degree-day outlook for this fall and winter, will limit total distillate consumption growth for all of 2006.

Refinery inputs of crude oil through the first 5 months of 2006 have averaged nearly 470,000 bbl/d (3.0 percent) below the same period last year. There are several reasons for this decline. Several refineries were still shut down or operated at reduced rates because of hurricane damage. Others pursued maintenance schedules that had been deferred from last fall, while others installed equipment to meet the new Tier 2 gasoline and ultra-low-sulfur-diesel regulations. The lower crude runs had the greatest impact on motor gasoline and distillate inventories, which fell by 23 and 20 million barrels, respectively, from the end of February through the end of April. Inventories did rebound in May, with total primary motor gasoline stocks ending May at less than 2 million barrels below the last 5-year average and distillate stocks 8 million barrels above the last 5-year average.

While significant supply uncertainties remain, some softening in the near-term gasoline balance is expected to dampen retail prices somewhat, barring new, unanticipated supply disruptions. The potential for midsummer retightening exists, however, if demand growth picks up to higher rates than currently expected or if refinery outages occur at unusual rates. Retail regular gasoline prices are projected to average about $2.60 per gallon in 2006 and 2007. Summer 2006 (April 1 to September 30) regular gasoline pump prices are expected to average $2.76 per gallon, 39 cents higher than last year’s average of $2.37 per gallon.

The transition to ultra-low-sulfur diesel (ULSD) fuel is beginning. Refiners and importers must ensure that at least 80 percent of the volume of highway diesel fuel they supply meets the new 15 parts per million (ppm) maximum sulfur limit this year, down from 500 ppm. Terminals will have until September 1, 2006, and retailers will have until October 15, 2006, to complete their transitions to ULSD. Summer 2006 retail diesel fuel prices are expected to average $2.79 per gallon, 38 cents higher than last year’s average of $2.41 per gallon.

Natural Gas Markets

In 2006, total U.S. natural gas consumption is projected to fall below 2005 levels by about 0.2 trillion cubic feet (tcf), or 0.9 percent, then increase by 0.8 tcf, or 3.8 percent, in 2007. With weak electric heating load due to the warm January and weaker expected cooling load this summer compared to 2005, the consumption of natural gas for generation of electricity is expected to increase only slightly, by 0.3 percent, in 2006 and then by 0.7 percent in 2007. Also, because of an exceptionally warm January this year, residential consumption is projected to fall by 6.0 percent from 2005 levels in 2006 and then increase by 7.7 percent in 2007. Recovery in natural-gas-intensive industrial output following the 2005 hurricanes will likely contribute to growth in industrial natural gas consumption this year (2.2 percent) and in 2007 (3.6 percent).

Domestic dry natural gas production in 2005 declined by 2.7 percent, largely because of hurricane-induced infrastructure disruptions in the Gulf of Mexico. Dry natural gas production is projected to increase by 0.7 percent in 2006 and 1.2 percent in 2007. Total liquefied natural gas (LNG) net imports are expected to increase from their 2005 level of 631 billion cubic feet (bcf) to 710 bcf in 2006 and 950 bcf in 2007.

On May 26, 2006, working natural gas in storage stood at an estimated 2,243 bcf. Stocks are 477 bcf above 1 year ago and 706 bcf above the last 5-year average. The unexpectedly warm winter weather accounts for much of the current high storage level. Spot Henry Hub natural gas prices, which averaged $8.86 per mcf in 2005, are expected to fall to an average of less than $7.00 per mcf over the next few months (down from an average of $13.44 per mcf in December). Thus, barring extreme weather conditions for the rest of the year, we expect a decline in the annual average Henry Hub spot price to about $7.74 per mcf for 2006. The respite is expected to be short-lived. Concerns about potential future supply tightness and continuing pressure from high oil market prices will likely drive spot natural gas prices to just over $10.00 per mcf this coming December and January. The Henry Hub price is expected to average $8.81 per mcf in 2007.

Electricity Markets

Electricity consumption is expected to increase only slightly during 2006 (0.8 percent) in response to weak heating-related demand this past January and the lower expected cooling-related demand this summer, compared to 2005. Electricity consumption is projected to grow about 2.1 percent in 2007.

Residential electricity prices rose an estimated 5.0 percent nationally in 2005. Some of the fastest increases in household electricity prices occurred in the Northeast and North Central regions. Sharply higher prices for peaking fuels and very high summer demand for those fuels, particularly natural gas, contributed to these increases. Additional increases in delivered residential prices are likely in many regions in 2006 and 2007.

Hydroelectric generation, particularly in the Pacific region (which accounts for approximately 50 percent of hydropower), is expected to increase by nearly 10 percent from last year. May 1, 2006 estimates of snowpack in the Pacific region are significantly above the normal range with California at 180 percent of normal, Oregon at 129 percent and Washington at 122 percent.

Coal Markets

Electric power sector consumption of coal is projected to grow by about 0.7 percent in 2006 and increase by 2.2 percent in 2007. Power sector demand for coal continues to increase in response to high natural gas and oil prices. U.S. coal production is expected to grow by 2.1 percent in 2006 and by 0.2 percent in 2007. The price of coal to the electric power sector is projected to rise throughout the forecast period, although at a slower rate than in 2005. In the electric power sector, coal prices are projected to rise by an average of 6.4 percent in 2006 and by an additional 1.7 percent in 2007, increasing from $1.54 per million Btu in 2005 to $1.66 per million Btu in 2007.

This forecast has been reprinted with permission from the DOE. For expanded information, contact Tancred Lidderdale at tancred.lidderdale@eia.doe.gov or Neil Gamson at neil.gamson@eia.doe.gov, or visit the EIA website at www.eia.doe.gov/steo.

dilemma \ di-ˈle-mə \ noun: an argument presenting two equally conclusive alternatives

In the February issue of Insulation Outlook, we examined the Risk Conundrum, where we said we suspect that in many catastrophic events, the real risk (hazard times probability) is put in play when people do not follow processes or execute well. Failure to execute is the far greater risk and should be a key target for any risk management process. Also introduced was the Investigation Dilemma: the idea that finding out what really went wrong sounds easy until you try doing it. The struggle to assign responsibility sets up a dilemma of classic proportions: truth versus the consequences of being found at fault.

The Dilemma series concludes in this issue with the fourth installment of Managing Safety Dilemmas. We will examine two dilemmas that can send anyone rushing to the medicine cabinet for migraine relief: the System Dilemma and the Middle Dilemma.

The System Dilemma

"The system you have is perfectly designed to produce the results you are getting."
-W. Edwards Deming

It is very likely that no single individual in the twentieth century had a greater impact on product quality than Dr. W. Edwards Deming. A respected consultant to industry leaders in Japan and the United States and author of the famous Fourteen Points for management, Deming was a towering presence in the world of manufacturing.

Except for his size (six feet, eight inches tall), nothing in Deming’s early career was exceptionally noticeable. He graduated with a doctorate in mathematical physics at the height of the Depression and then went to work for the Department of Agriculture before helping out with the 1940 U.S. Census. Soon, Deming turned his attention to the process of making things.

During World War II, Deming introduced statistical methods to improve the quality of war materials. His techniques made a huge impact. In the post-war era, he found a receptive audience among Japanese manufacturers eager to rebuild their factories and erase the perception that their products were inferior. Deming left a lasting mark in Japan: the Deming Prize for quality achievement.

By the 1980s, U.S. manufacturers were finally ready to listen to his ideas about improving their product quality. To a practicing statistician, as Deming was, the world of performance takes on the shape of a bell curve. The best and the worst, and everything clumped in the middle, are not all that different in a statistical sense. They are all products of the same system.

What is that system? A system is best thought of as "the complex relationships between related components." The idea comes from the natural world (think ecosystems), recognizing that even small changes can have large effects. When applied to the manufacturing world, the system is all the factors in play: raw materials, production equipment, methods and processes, people, and all that implies.

Deming argued that if you wanted better results, you had to change the system that produced those results. It was crucial to stop blaming the people making poor-quality products and start changing the entire system that produced those products. The argument carried the day. Those engaged in making things, from consumer electronics to chemicals and paint to cars and trucks, began applying statistical methods to their production. For example: change the process, move the mean, reduce variability, and tighten up the distribution curve. The results were nothing short of astounding. Product quality improved, as did cost and productivity, and, ultimately, profitability. Deming was a genius; his impact, profound.

Where’s the Dilemma?

With a success story like this, you are probably wondering where there could possibly be a dilemma, and what any of this has to do with managing safety performance.

But you have only heard half the story, and the better half at that. Remember, every good dilemma has two conditions that are equally valid and totally incompatible. Here is the rest of the story. Deming was right: there are systems, and those systems are often a significant factor in determining results. Thus, the best way to change performance is to change the system. However, the actions of individuals are also a significant factor in determining results. For starters, people are one of the components that make up the system. Unlike all the other components, people come fully equipped with the ability to choose how they act. That is not a minor difference.

In short, individuals are creatures of the system, but always willing creatures. That is what creates the System Dilemma.

Implications

Suppose you do not like the results and the behavior of people in the system. Who bears responsibility? Follow Deming’s logic and you might wind up someplace you would rather not be. If the system determines behavior, you would conclude that the system bears the responsibility. But how do you hold a system accountable? Are you willing to let the system excuse the behavior of individuals?

Here is a way to test your conclusions: suppose the behavior involves possible accounting fraud. Any of the headline cases from the business pages, such as WorldCom, HealthSouth, and Enron, serves as a perfect example of the dilemma. You know the story: accounting irregularities on a massive scale deprive shareholders of billions. The accountants plead guilty, saying that their bosses made them do it.

Would you excuse the behavior of the accountants? After all, they are just creatures of the system, right? If you are not buying that logic, you are in good company. Neither did the prosecutors, who argued successfully that accountants at these three companies knowingly violated the law. You can find a couple dozen of them now serving time in the federal penitentiary. If you were a shareholder, you would probably like to see the corporate bosses sitting in the pen with them.

The Wrong Stuff

Now for the finishing stroke to the System Dilemma: its relationship to safety.

You have seen the scenario: everybody knows there are problems, but they all just go along because nobody wants to rock the boat. That’s the power of the system. Then something really bad happens: a horrible accident, a major loss, people seriously hurt. History is replete with accidents that are now household names: Longford, Challenger, Bhopal. These were all situations where systems failed, and lives were lost. All the dirty laundry gets hung out in the accident investigation, and there is always a full load. Systems fail because people fail. Viewed as a past event, it is always obvious that any number of people had plenty of opportunity to prevent the accident. But they did not do it.

Why, if it’s wrong, did so many go along for the ride? As the saying goes, no snowflake in an avalanche ever feels responsible.

Sam Rayburn, the longtime Speaker of the House of Representatives, famously said, "If you want to get along, go along." The pressure to "go along" is enormous and often undesirable. We tell our school-age children to think for themselves and not do what everyone else is doing just to fit in with the crowd they want to impress. But what happens at work? We turn around and do exactly the opposite. It is called running with the herd. The system can be cruel to those who push from within to change it. Pick your favorite martyr: there are many to choose from.

Maybe there is a safety problem. Maybe we know something is wrong. But we act just like everyone else in the system: it is not our problem. We do not own the system and could not change things if we tried. So we do nothing.

The System Dilemma

There is the System Dilemma in full relief: a system exists that determines performance, and individuals determine their own performance. Both statements are true.

Ignore the first, and results will not change for the better. Ignoring the second is an invitation for irresponsible behavior. People do not have to go along if they truly do not want to. Deciding to falsify financial records is simply a matter of pleasing the boss, getting a bonus, and keeping the job instead of doing the right thing.

The same holds true for safety issues. Taking shortcuts, signing waivers, and ignoring the warning signs of unsafe equipment are all choices. The simple solution to the System Dilemma is this: Nobody has to make the wrong choice.

The Middle Dilemma

"He who walks in the middle of the road gets hit from both sides."
-George Shultz

Credit Bob DuBrul and Dr. Barry Oshry with inventing the term "Middle Dilemma" some twenty years ago. As systems consultants, Oshry and DuBrul had an uncomplicated way of looking at organizations. No matter what the nature of the organization, there were only three roles that mattered: tops, middles, and members.

Oshry and DuBrul’s principal interest was in the role played by those in the middle, who link members with tops. Their appreciation of the middle role came from their work with a wide range of middles: waiters; camp counselors; church pastors; and, yes, supervisors and managers in the world of industry. You can see by this list that "top" and "member" describe a wide variety of roles that are not limited to our traditional view of leaders and followers.

The model may seem simple (the best ones always are), but that does not mean it is useless, particularly as it explains the difficulties faced by those functioning in the middle of a system.

It’s Lonely in the Middle

A waiter, who links the customer and the kitchen, provides the perfect illustration of the difficulty of life in the middle. The waiter takes the order, the kitchen staff prepares the food, and the waiter serves the meal. When the food does not meet expectations, guess who bears the brunt of the criticism from the customer? It is not the chef. He is back in the kitchen, far removed from that heat.

The waiter has no control, and very little influence, over what goes on in the kitchen. The kitchen staff is completely insulated from the customer: they never have to deal with their own failures. That is a duty left to the waiter (who, by the way, is working for tips). How much of a tip do you think an unhappy customer leaves for the waiter?

You can see how DuBrul and Oshry were on to something. Middles play a vital role, but it is one that leaves them feeling powerless and all too often caught in the crossfire between the two parts of the system they connect. It is a frustration that most of us have experienced at one time or another. When it comes to managing safety performance, however, the Middle Dilemma can be more than frustrating: it can be downright dangerous.

The Middle Dilemma and Managing Safety

By now, if you play any sort of middle role in your organization, you have probably jumped ahead to the connection between Middle Dilemma and managing safety performance.

Just like our waiter, those working in the middle in the world of operations feel the brunt of the decisions and actions made by those at the top of the organization. They hear about it from the members they manage; sometimes they even get to witness firsthand what happens when things go wrong.

Middles also provide a second function that is not always entirely beneficial: they serve as a layer of insulation between members and the top. It might make life at the top a little bit easier, but it can also mean that important information gets bottled up. When that happens, the tops do not know what the middles know about what is really going on, which is not a good thing. It is not necessarily the fault of those at the top. Those in middle roles, with job titles like front-line supervisor, area superintendent, and process engineer, live where the real work of the organization takes place. They are familiar with all the important details that matter about safety performance: information like the real qualifications of those performing the work, the true condition of the equipment, how well policies and procedures are being followed, and what the real performance data look like. Said another way, the middles know reality; the tops may very well not.

"You Can’t Handle the Truth."

If those at the top always acted as though they understood this and were hungry for the truth, there probably would not be a safety version of the Middle Dilemma. But tops do not always want to hear the details about organization reality. It is messy, confusing, and sometimes contradictory to what they would prefer to think is reality.

That puts the middles on the horns of the dilemma. A middle can tell management all and be branded as an alarmist and obstructionist. Or, a middle can drive reality underground and be viewed as a "can-do" type. The path of least resistance is always easier, at least until there is a serious problem. When that happens, the tops are shocked to learn what is really going on. "How could you let that situation exist?" they ask the middle. There is never a good answer to that question. Know the feeling? If you do, you’ve got plenty of company.

A famous midnight conference call in 1986 between NASA’s space shuttle management team and their rocket contractor illustrates the problem as well as any ever documented. The contractor offered the sound, engineering-based reasons why the shuttle should not be launched at a temperature lower than fifty-three degrees. Frustrated by the implications, a senior NASA official blurted out, "When do you want me to launch, next April?"

Remarks like that, coming from tops, usually have a significant effect. In this case, the contractor put on its "management hat" and decided that in this situation, perhaps the science was not quite that important, with disastrous results.

The Middle Dilemma

Now that we have the Middle Dilemma out for public inspection, what can we do about it? It is a serious dilemma, one that has proven to be fatal on more than one sad occasion. It is not enough to describe it here. We feel obligated to offer some ideas about how to manage the dilemma.

In the Middle Dilemma, most of the important action happens between the middle and the top. The two principal factors to be dealt with are the restricted flow of important information up the chain of command and the insulation of the top from organization reality. It may well take all the diplomatic skills of Henry Kissinger to manage in the middle, but it is well worth trying. Besides, what other options do you have?

1. Don’t hide the truth.

NASA recognized that a big part of its cultural problem stemmed from the fact that top management had become cut off from science and engineering. Read that as "removed from the hard facts about reality."

Let’s face the truth: as middles, we create much of this problem ourselves. Our desire to make our operation look good to those at the top causes us to act in very predictable ways: cleaning up the place before the big visit; showing the boss the newest and best part of the operation instead of the oldest and worst; and putting the best spin on the hard data about conditions and performance. Given two versions of reality, we are inclined to report the better case.

Try tilting in the direction of the center. Disclose some of the bad with the good. Air a little bit of your dirty laundry. In the short term, you might not look as good; but in the long term, you probably will be better off with the top understanding reality.

2. Present reality better.

Yale professor Edward Tufte has built a successful career teaching how to present reality better. A master of the chart, he teaches that the conventional means of communication, such as PowerPoint slides, do not present reality well. Tufte says, "…the popular PowerPoint templates (ready-made designs) usually weaken verbal and spatial reasoning, and almost always corrupt statistical analysis."

Having sat through countless technical and management presentations to the tops, I can say that most of us middles do not explain things very well. We know reality, but all too often it gets lost in a flood of acronyms and confusing data presented in a rapid succession of PowerPoint slides. Take a lesson from those in the advertising business: keep the message simple and do not be reluctant to repeat it.

If all else fails, try communicating the old-fashioned way. Talk to people. When Louis Gerstner became CEO of IBM, he sent a powerful message to his organization when he turned off the projector and said, "Let’s just talk about your business."

3. Remind those at the top what is really at stake.

We have all gotten into the practice of using sanitized language to describe serious problems. It is no longer an emergency; it is a "non-routine event." Things are going haywire, and we call it an "abnormal situation."

That might be a good way to quiet the hysteria and avoid offending. It also can lull us into ignoring a serious problem. The CEO at Alcoa, after hearing a report of a fatal accident involving a twenty-year-old employee, turned to his senior management team and said, "We killed him."

Sometimes a little bit of blunt language is just what is needed to inject a healthy dose of reality into the situation. Bottom line: proceed with caution. But by all means, proceed.

In the 1800s natural refrigeration was a vibrant part of the economy. Natural ice harvested from the pristine rivers and lakes of the northern United States, particularly those in New England, was in demand. Harvested ice was stored in large quantities in ice houses and covered with sawdust for insulation.

Later, merchants loaded the ice into sailing ships as ballast. Again, the ice was covered with sawdust. Ice was delivered as far away as India, where it was welcomed, and to England, where interest was low. The supply of harvested ice was erratic, depending on the weather where it was harvested.

During the 1800s, many mechanical refrigeration systems were invented; they used refrigerants such as sulphur dioxide, methyl chloride, ether, and carbon dioxide, as well as wine, brandy, vinegar, and other substances.

The early refrigeration systems designed between 1850 and 1920 produced ice year-round to compete with harvested ice. The harvested ice producers advertised that, when it was available, their natural refrigeration did not fail like the early mechanical systems.

Several approaches existed for ice manufacturing during the early days. A very labor-intensive method used a series of 10 × 14 ft* plates immersed in water with a refrigerant of ammonia or circulated brine. Ice formed on both sides of the plates. This approach provided ice without any air bubbles and used potable water. The ice was harvested with warm brine or hot gas and cut to size for sale.

The other approach, still used today, was the can ice system. The complaint then was that it required distilled water to prevent air bubbles in the ice. This ice manufacturing method prevailed because it was simple and less labor-intensive than the plate method. As in the early days, 300 lb** cans are used today to manufacture ice.

As early as the 1880s, Carré ammonia absorption systems operated in south Texas. They were used to manufacture 1,000 lb of ice per day. The absorption machine was fired with wood. According to one author, O. Anderson, this ice competed with harvested ice shipped to Texas from Boston. The machine was in Austin, where two ice plants existed: one a plate ice machine and the other a can ice machine. In Chicago, a company built several plate ice machines that were shipped to the King Ranch in south Texas. These plants eventually were converted to can ice. The refrigeration system was a mechanical compressor with a steam drive.

The ice industry continued to grow, and large plants with the capacity to ice 20 railcars at a time were installed from the Rio Grande valley to the east coast. The plants made ice in 300 lb cans and crushed the ice before blowing it into the railcars’ bunkers. Most used steam-driven ammonia compressors. Atmospheric condensers also were appearing, as the supply of well water or river water was insufficient.

The condenser water was cooled by a spray system on the roof of the ice plant or sprayed over a pond. Gas engines and diesel engines replaced some of the steam drives as time went on and fuel prices became affordable. A number of block ice plants were installed as late as the 1950s. The last block ice plant installed in the country was in Houston during the early 1980s. It was a 100 ton/day plant.

Some refrigeration systems installed in the early 1900s used carbon dioxide (CO2) as the refrigerant. The systems required condenser water cold enough to prevent the system from operating in the supercritical area (under 750 psig). CO2 was available from the brewing industry at a lower cost than ammonia.

The condenser water usually came from a well or from a river. The compressors were adaptations of ammonia compressors that were modified with smaller diameter pistons to account for the much lower specific volume of CO2. Further use of CO2 occurred in the 1930s when it was used as a cascade refrigerant with ammonia as the high side to obtain low temperatures for freezing foods and to provide refrigeration for cold storages. CO2 is used the same way today to obtain temperatures to -66°F, while reducing ammonia charges and decreasing the size of the low-temperature compressors.

Ammonia absorption systems, the first systems to use ammonia as the refrigerant, still are used despite their inefficiency and high capital cost. These systems were viable into the 1950s and 1960s where waste heat, process heat or waste steam was available. This type of system was used in the very early days to obtain lower refrigerant temperatures than mechanical systems.

The classic ammonia absorption system was refined in the 1930s and 1940s to produce nearly pure ammonia. Most of these systems were installed in petrochemical or gas plants. They have been installed in the last few years in conjunction with gas turbine generating systems to satisfy government regulations. New versions of ammonia absorption systems are being designed today with much better efficiencies.

In the early 1900s reciprocating compressors were refined. However, they still operated at low revolutions per minute. They were driven by reciprocating steam engines, which were integral with the compressor. These compressors were called open compressors, as each connecting rod had a packing to prevent ammonia from escaping. Most compressors were vertical with the steam engine horizontal. Speeds were gradually increased, and synchronous motors eventually were used instead of the steam engines. Some horizontal slow-speed ammonia compressors were built in these years. In the 1980s some of these compressors were still being operated successfully. They were motor driven with flat belts.

In the 1920s and 1930s, more advances occurred in the ammonia compressor area. Most of the machines had crankcases and were considered closed compressors. The only seal involved was one on the crankshaft. These machines were more compact and generally more efficient. They were driven by synchronous motors at speeds up to 360 rpm, some through belt drives. Several U.S. and European manufacturers produced such machines.

During these years the cold storages and freezers used bare-pipe coils, usually 1¼ in. steel pipe, along the walls and in bunker form on the ceilings.

Defrosting the coils was difficult in freezers as removing the ice from the rooms was strictly manual labor. The coolers’ condensate drains also were numerous. The coils fouled with oils from the compressors because oil separation systems were not perfected. This type of evaporator was still in use in the 1980s. In the late 1930s and 1940s, finned surface evaporators began to displace the bare-pipe coils. Most of the finned surface evaporators were originally water defrost. Most of the refrigeration systems still were using hand expansion valves to feed ammonia to the generally flooded evaporators. The condensers were either vertical or horizontal shell-and-tube type with spray ponds or roof spray systems to cool the condenser water.

In the late 1930s and early 1940s new compressor development produced a v/w-type compressor that operated at rotating speeds up to 1,200 rpm and in sizes from 20 to 300 hp. These machines were smaller, lighter, and less expensive than the older vertical compressors, which were discontinued by some manufacturers in the 1950s.

Slightly better oil separators also were developed. These machines have been improved over the years and are still viable compressors for smaller refrigeration systems. Most manufacturers of this type of compressor have ratings for all refrigerants including ammonia, halocarbons, hydrocarbons, and CO2.

For low-temperature applications, the v/w compressors performed well until refrigeration systems became larger and required increased compressor displacement. Thus, in the 1940s and 1950s, the rotary air compressor was converted to refrigeration applications as a low-stage or booster compressor. These machines had a fixed volume ratio (Vi) and were not very efficient, but they could pump a lot of gas. The only capacity control was by varying speed.

These compressors were used for various refrigerants in low-temperature applications. Most of those applied in ammonia systems have been replaced by helical screw compressors.

In the late 1940s and even into the 1950s, automatic hot gas defrost systems started to replace water defrost for air units in freezer applications. And the days of electric defrost and air defrost also were numbered.

The 1950s and 1960s saw a major increase in the frozen food industry, which had been given its impetus when Clarence Birdseye learned how to process vegetables for freezing. Larger plants became plentiful. The refrigeration systems’ demands became larger, making the rotary compressor and the v/w compressor displacements inadequate. The huge equipment rooms and multiple compressors posed a major concern for food plant engineers. Then, use of the liquid overfeed (liquid recirculation) system came into its own. It had been patented in 1925 but rarely seen until the systems became large enough to justify its application. Evaporative condensers took the place of the shell-and-tube and cooling tower systems. The evaporative condenser had been around since the late 1930s, but its use had been limited.

In the late 1960s, the helical screw compressor, invented in Sweden in 1935, was being manufactured in several countries, but not in the U.S. A small design/build contractor brought the first ammonia screw compressor into the country and packaged it as a booster compressor. This was a major addition to the industrial refrigeration compressor field. These efficient compressors had far more displacement than the available reciprocating compressors. This compressor type also was being imported by a company that used them on R-22 air-conditioning water chillers. Helical screw compressors can be used for almost any high-pressure refrigerant.

The early screw compressor packages had relay logic controls, some of which still are operating today. Microprocessors replaced the relay logic controls in the late 1970s. Programmable controllers also are used to control these compressors. These controls and the many advances in screw compressor design provide excellent screw compressor control.

During the last 30 years, screw compressor designs have improved in the following ways:

  • from the original handwheel slide valve (capacity reduction) adjustment to a hydraulic system,
  • from symmetric to asymmetric profiles for better efficiency,
  • from sleeve bearings to roller bearing systems for better efficiency,
  • from fixed Vi to variable Vi to increase system efficiency,
  • from mesh-type oil separators to coalescing oil separators to reduce oil carryover,
  • to better internal designs to reduce noise, and
  • to much larger displacement machines to satisfy the demand for larger refrigeration systems.

In the late 1970s, a single screw compressor was introduced into the global market to compete with the twin helical screw compressor in the smaller displacement area.

Food freezing system designs have changed dramatically since Clarence Birdseye’s day. The initial freezing system was a horizontal plate device that froze vegetables in packages. The refrigerant was usually ammonia, and the units were quite small. They were great at the time but were quickly outgrown. Plate-type freezers, both horizontal and vertical, still are used today to freeze vegetables and other items, such as TV dinners in packages or various commodities in bulk blocks.

In the late 1930s through the 1950s, belt freezers were used to freeze vegetables such as peas, cut corn, lima beans, etc. Most used downdraft air onto a thin layer of the vegetables. Various methods were tried to keep the products individually quick frozen (IQF). Nearly none worked, and "cluster busters" were used to break up product that often froze in sheets.

In the 1960s, fluidized freezing was introduced. The major difference was that the air was blown from under the belt and at a velocity that would fluidize the product (keep it in suspension).

Various methods of initiating fluidization were tried. Some worked much better than others. Later, a dual-belt system was devised. The first belt was loaded with a thin layer of product to be frozen. Most vegetable and fruit products have a thin layer of water on them when introduced into the freezer. The first belt is used to quickly freeze the thin layer of moisture when it is introduced to -20°F air. The product is then lowered onto the fluidizing belt where it can be 6 to 10 in. deep and can be frozen in usually 8 to 10 minutes, depending on the product. Many variations of IQF freezers are available today. This type of system has been applied to freezing 100,000 lb of french fries an hour.

Larger products that require longer freezing times are usually frozen on spiral freezers. This freezer usually handles such products as hamburger patties, poultry parts, pies, cakes, packaged items, etc. Several variations of spiral freezers exist, some having more than 3,000 ft of belting to provide freezing times as long as an hour.

All of these technological advancements have led to today’s efficient freezing methods for making ice and producing frozen foods.

Acknowledgments: Art for this article is from ASHRAE’s "Heat & Cold: Mastering the Great Indoors" by Barry Donaldson and Bernard Nagengast. This article was published in ASHRAE Journal, November 2004. © Copyright 2004 American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc. It is presented for educational purposes only. This article may not be copied and/or distributed electronically or in paper form without permission of ASHRAE.

Materials used in the construction of marine vessels and offshore installations are subject to a number of regulatory requirements. The regulations vary, depending on the type of vessel and its geographical area of operation. Rules and regulations produced by the International Maritime Organization (IMO), flag states, agencies, regulators such as the United States Coast Guard (USCG), and classification societies such as the American Bureau of Shipping (ABS) can apply to insulation products. Unfamiliarity with the intricacies of the regulations can lead to specification and installation of materials with improper certifications. An understanding of the regulations, their areas of application, and interrelationships helps prevent costly mistakes.

This article discusses the passive fire protection requirements for insulation products installed onboard marine vessels, along with the regulatory framework and relationship between the various regulations. Application of the regulations to various vessels is then illustrated through examples of different vessels used in different operational activities.

Regulatory Environment

Self-propelled vessels over 500 gross tonnes involved in international trade are subject to the IMO Safety of Life at Sea (SOLAS) requirements.1 Similarly, Mobile Offshore Drilling Units (MODUs) involved in international operations are subject to the IMO MODU Code requirements.2 In addition to these international regulations, many countries—known as flag states—have their own requirements for vessels. For U.S.-registered vessels, the United States, through the USCG, implements the requirements contained within the Code of Federal Regulations.

In addition, many individual vessels are “classed” with a marine classification society, and are thus subject to additional class society requirements. Many of the Classification Society rules are aligned with the IMO regulations.

SOLAS-certificated Vessels

Vessels that hold a SOLAS certificate are required to comply with the SOLAS regulations. Specifically, the passive fire protection requirements affecting the use of insulation products are contained within SOLAS Chapter II-2, entitled “Construction – Fire Protection, Fire Detection and Fire Extinction.” Relevant SOLAS regulations are noted below.

  • Reg. 4.4.3: In spaces where penetration of oil products is possible, the surface of the insulation shall be impervious to oil or oil vapours.
  • Reg. 5.3.1.1: Insulation materials shall be non-combustible, except in cargo spaces, mail rooms, baggage rooms and refrigerated compartments of service spaces.
  • Reg. 5.3.1.2: Vapour barriers and adhesives used in conjunction with insulation, as well as the insulation of pipe fittings for cold service systems, need not be non-combustible materials, but they shall be kept to the minimum quantity practicable and their exposed surfaces shall have low flame-spread characteristics.
  • Reg. 6.2: Paints, varnishes and other finishes used on exposed interior surfaces shall not be capable of producing excessive quantities of smoke and toxic products.

The terms “non-combustible,” “low flame-spread,” and “not capable of producing excessive quantities of smoke and toxic products” refer to characteristics of materials determined by conducting specific fire tests. These tests are outlined within the IMO Fire Test Procedures (FTP) Code.3

Materials that are required to be non-combustible are to be tested in accordance with IMO FTP Code Annex 1, Part 1. This testing, conducted in a refractory tube furnace, uses the procedures outlined in International Organization for Standardization (ISO) 1182:1990.4 A material is considered non-combustible provided that all of the following criteria are met (a simple pass/fail check is sketched after this list):

  • The average mass loss does not exceed 50 percent,
  • The mean duration of sustained flaming does not exceed 10 seconds,
  • The average furnace thermocouple temperature rise does not exceed 30°C, and
  • The average surface thermocouple temperature rise does not exceed 30°C.
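
As an illustration only, the four criteria above can be expressed as a simple pass/fail check. This is a minimal sketch, not part of the FTP Code; the data class, field names, and example values are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class FurnaceTestResult:
    mass_loss_pct: float        # average mass loss, percent
    flaming_duration_s: float   # mean duration of sustained flaming, seconds
    furnace_rise_c: float       # average furnace thermocouple temperature rise, deg C
    surface_rise_c: float       # average surface thermocouple temperature rise, deg C

def is_non_combustible(r: FurnaceTestResult) -> bool:
    """Check a test result against the four Part 1 criteria listed above."""
    return (
        r.mass_loss_pct <= 50.0
        and r.flaming_duration_s <= 10.0
        and r.furnace_rise_c <= 30.0
        and r.surface_rise_c <= 30.0
    )

# Hypothetical sample: small mass loss, no sustained flaming, small temperature rises.
print(is_non_combustible(FurnaceTestResult(2.0, 0.0, 5.0, 4.0)))  # True
```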

Low flame-spread materials are tested in accordance with FTP Code Annex 1, Part 5, using the test apparatus and procedures outlined in IMO Resolution A.653(16).5 Materials that satisfy the surface flammability criteria listed in Figure 1 meet the low flame-spread requirements.

Materials that are required to be not capable of producing excessive quantities of smoke and toxic products are to be tested in accordance with IMO FTP Code Annex 1, Part 2. The optical density and concentration of seven toxic gas species are measured following procedures outlined in ISO 5659-2:1994.6 Figure 2 lists the maximum permissible concentration of each of the seven species.
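
The same kind of comparison applies to the smoke and toxicity results: each measured gas concentration is checked against the corresponding limit in Figure 2. The sketch below is illustrative only; the gas names and limit values shown are placeholders, and the actual values come from Figure 2 (or the FTP Code itself).

```python
def within_toxicity_limits(measured_ppm: dict[str, float],
                           limits_ppm: dict[str, float]) -> bool:
    """Return True if every measured gas concentration is at or below its limit."""
    return all(measured_ppm.get(gas, 0.0) <= limit for gas, limit in limits_ppm.items())

# Placeholder limits and made-up test data, in ppm, for illustration only.
limits = {"CO": 1450.0, "HCl": 600.0}
measured = {"CO": 300.0, "HCl": 15.0}
print(within_toxicity_limits(measured, limits))  # True
```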

U.S.-Registered Domestic Service Vessels

Vessels that are registered in the United States and involved in domestic service are subject to the U.S. domestic requirements as outlined within the U.S. Code of Federal Regulations (CFR), specifically Title 46.7 Additional guidance is provided within individual documents including Policy File Memorandum (PFM) 1-008 and Navigational and Vessel Information Circular (NVIC) 9-979 published by the USCG. Applicable domestic service regulations vary, depending on the category of vessel. A selection of insulation products and their associated USCG approval categories are listed in Figure 3.

Attention should be paid to the installation method for marine insulation products, with consideration given to the expected movement of the vessel, the normal wear expected due to the vessel’s operation, and the thermal effects of a fire. The specific method of attachment of the insulation product depends upon the method of approval. Products tested in accordance with IMO Resolution A.754 (18),10 are to be installed in the same manner as the fire test. Products tested under 46 CFR 164.007 are to be installed in accordance with the manufacturers’ guidelines, as approved by the USCG. Products that do not have specific manufacturers’ installation instructions are to be installed using welded steel pins and clips. The pins shall have a 3-millimeter diameter at minimum and shall be spaced in a 0.3-meter grid pattern.9
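
As a rough worked example of the 0.3-meter grid spacing noted above, the sketch below estimates how many welded pins a flat bulkhead panel would need. The panel dimensions and the assumption of a simple rectangular grid are illustrative only, not a requirement from NVIC 9-97.

```python
import math

def estimate_pin_count(width_m: float, height_m: float, spacing_m: float = 0.3) -> int:
    """Estimate pins needed on a rectangular panel laid out on a regular grid."""
    pins_across = math.floor(width_m / spacing_m) + 1  # pins along the width
    pins_up = math.floor(height_m / spacing_m) + 1     # pins along the height
    return pins_across * pins_up

# A hypothetical 2.4 m x 3.0 m bulkhead panel: 9 x 11 = 99 pins.
print(estimate_pin_count(2.4, 3.0))
```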

When applied to bulkheads, insulation shall be applied over stiffeners and closely fitted around other protrusions, extending for a distance of 300 millimeters from the bulkhead. Similarly, where an insulated bulkhead intersects an un-insulated bulkhead, the insulation is to be overlapped for a distance of 300 millimeters.9

Combustible adhesives may be used to secure insulation and vapor barriers against pipe or ductwork but shall not fail early in a fire and shall be applied in limited quantities. Combustible adhesives shall not be in direct contact with heat sources.9

Summary

The regulatory environment for insulation products in a marine setting can be complicated due to different international, domestic, and classification society requirements. An understanding of the applicable regulations and the specific fire test requirements can ensure that the appropriate products are specified and installed on each vessel.

References

1. International Convention for the Safety of Life at Sea (SOLAS), 2000 Amendments, IMO, London, England.
2. Code for the Construction and Equipment of Mobile Offshore Drilling Units, Resolution A.649(16), 2001 Consolidated Edition, IMO, London, England.
3. International Code for Application of Fire Test Procedures, Resolution MSC.61(67), 1998, IMO, London, England.
4. Reaction to Fire Tests for Building Products: Non-combustibility Test, ISO 1182:1990, ISO, Geneva, Switzerland.
5. Recommendation on Improved Fire Test Procedures for Surface Flammability of Bulkhead, Ceiling and Deck Finish Materials, Resolution A.653(16), IMO, London, England.
6. Plastics – Smoke Generation – Part 2: Determination of Optical Density by a Single-Chamber Test, ISO 5659-2:1994, ISO, Geneva, Switzerland.
7. Code of Federal Regulations, Title 46, Shipping, U.S. Government Printing Office, Washington, D.C., U.S.A.
8. Policy File Memorandum PFM 1-00, Implementation of the FTP Code, USCG, Department of Homeland Security, Washington, D.C., U.S.A.
9. Guide to Structural Fire Protection, Navigational and Vessel Information Circular NVIC 9-97, USCG, Department of Homeland Security, Washington, D.C., U.S.A.
10. Recommendation on Fire Tests for “A”, “B” and “F” Class Divisions, Resolution A.754(18), IMO, London, England.

Figure 1. Surface flammability criteria for low flame-spread materials.
Figure 2. Maximum permissible concentrations of the seven toxic gas species.
Figure 3. Insulation products and their associated USCG approval categories.

The ASTM Task Group E05.11.02 for ASTM E814, chaired by John Valiulis of HILTI, Inc., is considering the addition of an air leakage standard to E814.

The air leakage test could be similar to what exists in UL 1479 for simulation of smoke movement through firestop systems. Air leakage ratings, known as “L” ratings, are the industry’s suitability statement for use of firestop systems in fire- and smoke-rated assemblies. The International Building Code has new requirements for firestop systems to achieve L ratings of less than five cubic feet per minute per square foot of opening area (cfm/sf).
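
For a rough sense of what that limit means, the sketch below divides a measured leakage flow by the opening area and compares the result to the 5 cfm/sf figure. The measured values are made up for the example; they are not taken from UL 1479 or any listed system.

```python
def l_rating(leakage_cfm: float, opening_area_sf: float) -> float:
    """Air leakage per unit of opening area, in cfm per square foot."""
    return leakage_cfm / opening_area_sf

# Hypothetical test data: 3.2 cfm of leakage through a 1.0 sf opening.
rating = l_rating(leakage_cfm=3.2, opening_area_sf=1.0)
print(f"L rating: {rating:.1f} cfm/sf -> {'meets' if rating < 5.0 else 'exceeds'} the 5 cfm/sf limit")
```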

The Perimeter Fire Containment task group produced ASTM E2307, Standard Test Method for Determining Fire Resistance of Perimeter Fire Barrier Systems Using Intermediate-scale, Multi-story Test Apparatus. This new test determines suitability for use of products in perimeter fire containment systems, evaluating performance of the perimeter fire containment system in preventing fire from spreading floor to floor through the void safing slot area between the floor slab edge and the exterior curtainwall system. At the 2005 Windsor Building high-rise fire in Madrid, Spain, for example, rapid fire spread occurred due to lack of perimeter fire containment systems.

A perimeter fire containment “leap frog” standard task group, the E05.11.20 development committee, also met in 2005. This group will continue to refine requirements in this important area in 2006, according to Thermafiber, Inc.’s Jim Shriver, Chair of the committee. Task Group E05.11.22 is studying smoke barriers and smoke partitions.

In another task group headed by Firestop Contractors International Association’s (FCIA’s) Randy Bosscawen (of Multicon Fire Containment), a new standard for qualifications of firestop inspectors is under development. The task group sent an early edition of the standard for ballot and received several negatives, most of which resulted in changes included in the next draft. Another meeting will take place soon to bring several viewpoints into the standard development process. This standard will be a companion standard to the existing ASTM E2174 and ASTM E2393 standards.

FCIA Manual of Practice Updated

FCIA’s Firestop Industry Manual of Practice (MOP) had more than one hundred pages updated in 2005. Testing, products, and maintenance sections all were rewritten by the FCIA Technical Committee. The MOP focuses on many aspects of the firestopping industry and highlights the quality protocols needed to install firestopping to the systems documentation from various testing directories. “FCIA will be working on three more sections in 2006 because the MOP is a living document,” said FCIA Technical Committee Chair Mike Dominguez of Firestop Specialties, Inc., Miami, FL.