Improving validity of on-farm research
Lee J. Johnston, PhD; Antonio Renteria, DVM, PhD; Michael R. Hannon, DVM, PhD
LJJ: West Central Research and Outreach Center, University of Minnesota, Morris, Minnesota; AR: CENIFMA-INIFAP-SAGARPA, Mexico; MRH: Buffalo, Minnesota. Corresponding author: Dr Lee J. Johnston, University of Minnesota, West Central Research and Outreach Center, 46352 State Hwy 329, Morris, MN 56267; Tel: 320-589-1711; Fax: 320-589-4870; E-mail: firstname.lastname@example.org.
Cite as: Johnston LJ, Renteria A, Hannon MR. Improving validity of on-farm research. J Swine Health Prod. 2003;11(5):240-246.
On-farm research receives much attention from swine producers and industry professionals, and is often perceived by pork producers to be more relevant to real-world commercial pork production than controlled experiments conducted in university settings. Swine producers and industry professionals must realize that retrospective analysis can identify associations among variables of interest but provides no evidence for a causal relationship between a manipulated variable(s) and a production response. As substantial commitments of time and resources are required for a properly conducted experiment, producers should give careful consideration to undertaking on-farm research. To generate useful information in on-farm experiments, one must adhere to principles of scientific inquiry; maintain integrity of the production system; willingly commit labor and financial resources; and pay attention to details. Effective communications with farm owners, barn workers, and other decision makers are crucial. Advice of a statistician on experimental design and statistical analysis of data, before initiating the study, helps ensure conclusions are valid and defensible. Ideas to enhance the success of on-farm research are presented.
Keywords: swine, experimental design, research methodology
Received: November 22, 2002
Accepted: January 3, 2003
Approaches to pork production research
Livestock producers aspire to achieve a more thorough understanding of the biology of their production system so that they can manage that system with optimal efficiency. A wealth of data collected under a variety of conditions is required to gain a full understanding of a swine production system. Data may be collected under tightly controlled conditions at universities or research institutes, in controlled studies conducted at production units (on-farm research), and through retrospective analysis of commercial production records. Testimonials from practitioners in commercial production settings may provide some information about a production system. Each of these approaches possesses inherent strengths and weaknesses.
Data collected at universities and research institutes
Experiments conducted at universities and research institutes have the primary advantage of strict control over most variables that might affect the outcome of the experiment. Generally, these experiments are designed to produce precise results that answer questions about how and why a particular treatment elicits an observed response.1 This high degree of control allows the investigator to gain confidence that the observed responses are attributable to the treatments imposed and not due to unseen differences (confounding variables) between the control and treated animals. Confounding variables, such as characteristics of the population being studied (eg, genetic line, age, health status, nutritional history) and the environment in which the experiment is conducted (eg, ventilation rate, pen or crate design, season, geographical location) are tightly controlled so that the only differences between control and treated animals are the imposed treatments. As strict control of experimental conditions is very costly, the experiment usually involves a relatively small number of animals. In addition, tight control of confounding variables creates a somewhat "artificial" situation that may not reflect commercial production systems where the results will be applied.
On-farm research (field trials)
On-farm trials or field trials are conducted at farms involved in commercial production. One can easily argue that this setting provides the true test of whether a management practice or treatment has any utility, because commercial farms offer production conditions that are not present in animal facilities of universities or research institutes.2 While university research determines how and why a technology works, on-farm research focuses primarily on which technology should be applied under practical conditions, and what results may be expected.1 On-farm trials allow one to evaluate the efficacy of a new technology in a specific production system. One may conclude that if the intervention works under field conditions, with all the inherent variation present in the system, then the intervention truly is efficacious. This conclusion may be accepted only if one is fairly certain that coincident changes in confounding variables are not responsible for the observed response. For instance, if the performance of a new feeder for lactating sows is compared to that of existing feeders in the farrowing quarters and determined to be superior, one could easily conclude that the design of the new feeder is better suited for lactating sows than the existing feeder. In reality, the new feeder may not be a better design. The new feeder simply may be operating properly because it is new, and the existing feeders are old, worn, and not working properly. In this instance, maintenance of the existing feeder may have a larger influence on selecting the superior feeder than design of the feeder.
Retrospective analysis of commercial production data
Retrospective analysis of commercial production data may provide a useful tool to gain some insight into relationships among production variables. Large numbers of observations made during extended periods of time are characteristic of this type of analysis. The investigator cannot control confounding variables and may have limited information to adjust for these confounders in the statistical analysis. Retrospective analysis may identify associations among variables of interest, but provides no evidence for a causal relationship between one or more manipulated variables and a production response. Nonetheless, apparent relationships identified among production variables in retrospective analyses may be tested using carefully designed prospective treatments imposed in controlled university or on-farm experiments.
Statistical process control
Statistical process control (SPC) is an approach to the analysis of data collected on farms, which allocates total variation associated with a production process to common causes and special causes. Common causes are innate to the process and are always present. Special causes are occasional disturbances to the process that appear in a somewhat unpredictable manner. Limits to the variation around a central mean are calculated for each production process on the basis of sample size, overall mean for the process, and the average range of observations.3 Variation within these calculated limits is considered normal and not a cause for intervention in the production process, while variation outside these limits suggests that the production process is out of control and some intervention is warranted. Results of SPC analysis are presented in chart form to graphically depict variation in a process. Statistical process control charts have been used widely in the manufacturing sector to detect variation in production processes or products. Several authors have argued that use of SPC charts may be a valuable tool in monitoring swine production systems.4,5 However, quantitative evaluation of SPC procedures in livestock production systems has not been reported.6 A thorough understanding of SPC procedures7 and large numbers of observations are critical to extract value from SPC approaches.
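As an illustration of the SPC approach described above, the following Python sketch computes individuals-chart control limits (mean plus or minus 2.66 times the average moving range) and flags out-of-control observations. The weekly mortality values are hypothetical:

```python
# Individuals (I-chart) control limits: mean +/- 2.66 * average moving range.
# The constant 2.66 is 3/d2, where d2 = 1.128 for moving ranges of size 2.
def control_limits(values):
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * mr_bar, mean, mean + 2.66 * mr_bar

def out_of_control(values):
    """Indices of observations falling outside the calculated limits."""
    lcl, _, ucl = control_limits(values)
    return [i for i, v in enumerate(values) if v < lcl or v > ucl]

# Hypothetical weekly pre-weaning mortality percentages; the spike in
# week 8 (index 7) is a "special cause" the chart should detect
weekly_mortality = [8.1, 7.9, 8.4, 8.0, 8.3, 7.8, 8.2, 14.5, 8.1, 8.0]
flagged = out_of_control(weekly_mortality)
```

Variation inside the limits ("common cause") would prompt no intervention; only the flagged week warrants investigation. For subgrouped data, an X-bar and R chart with tabulated constants would be used instead of this individuals chart.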
Testimonials
Testimonials provided by swine practitioners or swine producers are evaluations of a technology or intervention based on unstructured observations rather than controlled experimentation. Testimonials may help identify interesting areas for future research, but they are clouded by personal biases of the observers, and should not be used as a basis for decision making in a swine production system. If similar observations are reported in a variety of production systems, one may design a controlled experiment to determine whether the perceived cause-and-effect relationship is real.
Guiding principles for valid on-farm experiments
Pork producers and industry professionals seem to have an intense interest in on-farm research. This interest seemingly stems from the fact that experiments are conducted in their facilities, so the results are tailored to their production system. Furthermore, tangible results are generated that producers can see and experience personally. The central question is, "Are the results valid?"
There is a dearth of published information to guide swine veterinarians and other consultants in the conduct of on-farm experiments. Edwards-Jones8 discussed the merits of on-farm research in developed countries, such as the United Kingdom, from a societal viewpoint, but provided little direction on how to conduct trials on commercial farms. Several authors1,9,10 provided a stepwise guide to conducting on-farm research in developing countries. The vast differences in culture, resources, production systems, and technical expertise between producers in developing countries and producers utilizing capital-intensive, technologically advanced production systems limit seamless adoption of the approaches suggested by these authors. However, some fundamental principles of on-farm research are transferable to modern production systems and are presented below.
Adhere to principles of scientific inquiry
The primary objective of on-farm research is to obtain, in a commercial production setting, a valid, defensible answer to the question being studied. To achieve a valid answer, one must adhere to basic principles of scientific inquiry. A full discussion of these principles is beyond the scope of this paper. The reader is referred to other authors for a more complete discussion of these basic principles.11,12
Experimental unit. The investigator must select the proper experimental unit, which is defined as the smallest entity to which one application of a treatment is applied.13 In swine production facilities, experimental units might be individual sows in stalls, pens of pigs, animals in a room within a barn, or animals in a barn. An individual sow housed in a stall may be an experimental unit, because one sow may receive the control treatment while an adjacent sow receives the experimental treatment. When sows are housed in groups, a pen of sows may be the experimental unit. In most experiments conducted with nursery, growing, or finishing pigs within one room or barn, pen is the experimental unit, because all pigs in the pen are exposed to the same treatment. However, two adjacent pens that share a fenceline feeder constitute only one experimental unit for a nutrition experiment, because all pigs in both pens are exposed to the same treatment.
Coefficient of variation. The coefficient of variation (CV) provides a measure of inherent variation in a trait and is expressed as a percentage. The CV for a trait is calculated by dividing the standard deviation of observations by the treatment mean, and then multiplying the result by 100.14 The CV reflects the unexplained variation, termed experimental error, that occurs among experimental units that are treated alike. High CVs make treatment differences difficult to detect, while low CVs make detection easier.
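The calculation can be sketched in a few lines of Python; the litter-size values below are hypothetical:

```python
import statistics

def coefficient_of_variation(observations):
    """CV (%) = (sample standard deviation / mean) * 100."""
    return statistics.stdev(observations) / statistics.mean(observations) * 100

# Hypothetical litter sizes at weaning for one treatment group
litter_sizes = [9, 10, 8, 11, 9, 10, 9, 12, 8, 10]
cv = coefficient_of_variation(litter_sizes)  # about 13%
```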
Sample size. Statistical procedures are available to calculate the minimum sample size necessary to produce a reliable result with the desired sensitivity; this determines the power of the experiment. Sample size calculations are unwieldy for those inexperienced with their use. Berndtson15 simplified these calculations by developing a series of tables in which sample size is based on the CV for the trait of interest and the magnitude of difference one hopes to detect (Table 1). Sample size calculators are also available on the World Wide Web.16,17
The size of an experiment (number of replicates) is dictated by the CV for the response variable(s) of interest, the alpha level (α) selected by the experimenter, and the desired power of the experiment. As the CV for a response variable increases, and the level of α and desired power of the experiment remain constant, the number of replicates required to detect a statistically significant difference among treatments increases. Exercising control over confounding factors decreases the CV and allows experiments to be smaller while remaining powerful.
Scientific inquiry is grounded on the development of two hypotheses: the null hypothesis (H0) and an alternate hypothesis (HA). The H0 assumes that no difference in the response variable(s) is caused by the imposed treatments. The HA states that the imposed treatments did elicit a difference in the response variable(s). Once the appropriate hypotheses are established, investigators select the α level they will use to evaluate the results. The α level is a measure of the probability that the H0 will be rejected (ie, treatments are declared different) when in truth there is no difference among treatments. Incorrect rejection of the H0 is called a Type I error. An α level of 0.05 (P < .05) indicates that there is less than a 5% (one in 20) chance that a Type I error will be committed, or, conversely, that the H0 will be accepted correctly 95% of the time. Acceptance of the H0 means that there is insufficient evidence for a difference among treatments; it does not allow one to state with confidence that there is no difference among treatments. An α level of 0.10 (P < .10) means that the investigator will be wrong in believing that treatments did affect the response variables in one of 10 experiments, but will be correct 90% of the time.
The power of the experiment is determined by the probability that the investigator will incorrectly accept the H0 (ie, fail to detect a true treatment difference). This is a Type II error. The acceptable Type II error rate, designated β, is often set at 0.20, which means that a true difference in treatments will not be detected 20% of the time; conversely, the investigator will correctly detect a difference (ie, accept the HA) 80% of the time. The ability to detect true differences is the power of the experiment (1 - β).
The selection of a proper α level is a matter of much discussion among scientists and those who implement scientific findings in commercial production. An α level of 0.05 is widely accepted among scientists because it provides great confidence that the investigator will not mistakenly make claims about the effectiveness of a treatment. This is a conservative approach common among scientists. However, producers and practitioners working under commercial conditions may find a Type I error rate of 10% or 15% acceptable, especially if the cost of mistakenly adopting a technology is low. Decreasing the α level increases one's confidence that detected differences are real, but makes statistically significant differences less frequent. Decreasing β increases the power of the experiment, which requires more replications, assuming that the CV stays constant. Thus, increasing the power of an experiment increases its size.
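The trade-off between α and power can be illustrated with a normal-approximation sketch in Python; the CV, detectable difference, and group size used here are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def power_two_group(cv_pct, diff_pct, n_per_group, alpha):
    """Approximate power of a two-sided, two-group comparison of means,
    with variability (CV) and the detectable difference both expressed
    as percentages of the mean. Normal approximation; illustrative only."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    effect = (diff_pct / cv_pct) * sqrt(n_per_group / 2)
    return NormalDist().cdf(effect - z_alpha)

# Hypothetical trial: CV = 10%, detect a 5% difference, 63 animals per group
p05 = power_two_group(10, 5, 63, alpha=0.05)  # about 0.80
p10 = power_two_group(10, 5, 63, alpha=0.10)  # higher: relaxing alpha adds power
```

Relaxing α from 0.05 to 0.10 raises power for the same number of replicates, which is the trade-off commercial decision makers may choose to accept when the cost of a Type I error is low.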
Replicates. Treatments must be replicated or repeatedly assigned to similar experimental units in sufficient quantity to assure a reliable result. In general, more replication is better than less; however, there is a practical limit to the capital and human resources that can be committed to an experiment. To determine the appropriate number of replications for each treatment, one derives an estimate of the CV for the trait of interest from previously reported research conducted by other investigators under conditions similar to those in the proposed experiment. The most valuable CV is one calculated from data collected under conditions the same as those in the proposed experiment (eg, genetics, housing, health status, nutrition). If the estimated CV and the magnitude of difference one would like to detect are known, one may refer to Table 1 to determine the number of replicates necessary for adequate statistical power to detect treatment differences. For example, assume that the researchers would like to detect an increase in litter size from 9.5 to 10 pigs per litter at weaning, a 5% improvement. If the CV equals 10%, 63 sows per treatment, or 126 sows for an experiment with two treatments, will be required. However, if the CV is 20%, 252 sows per treatment will be required. The recommendations listed in Table 1 assume that the investigators accept a Type I error rate of less than 5% and a Type II error rate of less than 20%.
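The sample sizes in the example above can be reproduced with a standard normal-approximation formula for a two-group comparison of means, n = 2[(z for α/2 + z for β) x CV / d]^2, where d is the detectable difference expressed as a percentage of the mean. A Python sketch:

```python
from math import ceil
from statistics import NormalDist

def replicates_per_treatment(cv_pct, diff_pct, alpha=0.05, beta=0.20):
    """Replicates per treatment for a two-sided, two-group comparison of
    means (normal approximation). CV and the detectable difference are
    both expressed as percentages of the mean."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(1 - beta)
    return ceil(2 * ((z_alpha + z_beta) * cv_pct / diff_pct) ** 2)

# Detect a 5% improvement in litter size (alpha = 0.05, power = 0.80)
n_cv10 = replicates_per_treatment(10, 5)  # 63 sows per treatment
n_cv20 = replicates_per_treatment(20, 5)  # 252 sows per treatment
```

Doubling the CV quadruples the required replication, which is why controlling confounding factors (and hence lowering the CV) pays off so quickly in experiment size.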
Accurate estimation of the CV is central to proper use of Table 1. Investigators involved in on-farm research may or may not have easy access to CVs for many traits of interest. Coefficients of variation for some traits of interest in on-farm research are presented in Table 2, but on-farm researchers should determine CVs for their own production systems. The CVs presented in Table 2 were derived from a random selection of reports recently published in the Journal of Animal Science. This information is intended as a general reference, not as a substitute for determining CVs that more closely reflect the conditions of a trial being designed for a specific production system. Berndtson15 provides a complete discussion of the issues surrounding proper selection of the CV for use in Table 1.
Randomization. Treatments must be assigned randomly to experimental units. Randomization ensures that all experimental units have an equal chance of being assigned to any of the available treatments.11 Randomization is the principle that workers in commercial units most often compromise in the interest of convenience and ease of implementing the experimental protocol. For instance, randomly selecting one row or section of gestation crates to house control sows and another row or section of crates for treated sows is not true randomization. One row may be nearer air inlets or cold outside walls or at the end of a feed line, which could create a different environment and potentially a differential response to the imposed treatments. While this approach may make record-keeping easier, reduce the chances of misapplying treatments, and improve labor efficiency, the potential for confounded results is also high, which subverts the primary goal of the experiment.
Random allocation of experimental units to treatments should be completed before the investigator sees animals that will be assigned to the experiment. The best way to accomplish this is for the experimenter to know which experimental units (eg, gestation stalls, farrowing stalls, nursery pens, farrowing rooms) will be available for the experiment and to assign a number to each available experimental unit. The number of each unit is written on a piece of paper, all of the numbers are placed in a container, and the researcher mixes them up and draws a number that is assigned to the first treatment. The second number drawn is assigned to the next treatment, and so on. The procedure continues until all the experimental units have been assigned to treatments. This procedure is simple, but is time consuming when large numbers of experimental units are involved. Alternatively, one can use the random number generator (RANDBETWEEN function) of Excel (Microsoft Corporation, Redmond, Washington) to randomly assign a treatment number to the experimental units that have been entered into the spreadsheet.
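As an alternative to drawing numbers from a container or using Excel, the allocation can be scripted. This Python sketch (the stall numbers and treatment names are hypothetical) shuffles the units and then deals them to treatments in rotation so that group sizes stay balanced:

```python
import random

def assign_treatments(unit_ids, treatments, seed=None):
    """Randomly assign treatments to experimental units, balanced as
    evenly as the unit count allows. A fixed seed makes the allocation
    reproducible for the written protocol."""
    rng = random.Random(seed)
    units = list(unit_ids)
    rng.shuffle(units)
    # Deal the shuffled units to treatments in rotation
    return {unit: treatments[i % len(treatments)] for i, unit in enumerate(units)}

# Hypothetical example: 12 farrowing stalls, control vs test diet
allocation = assign_treatments(range(1, 13), ["control", "test"], seed=42)
```

Recording the seed in the protocol lets anyone reproduce and audit the allocation, which is harder to do with slips of paper.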
Maintain integrity of the commercial production system
The main reason for conducting on-farm research is to determine the response of animals housed and managed in commercial production systems. Consequently, the characteristics of the production system must be maintained to achieve this objective. Investigators must attempt to control as many confounding variables as possible to ensure a valid test of the technology or hypothesis being evaluated. However, if the character of the production system is lost in this quest for maximal control, one has created a controlled set of conditions similar to a university setting, and the experiment no longer meets the "on-farm" objective.
For example, assume a researcher wants to evaluate a new method for insemination of sows. Since the goal is to know if this new method has any chance of working, the best inseminator on the farm is selected to mate every other sow using the new procedure. Only data from sows mated by this superior technician will be considered in the experiment. This experiment can provide a reliable conclusion about whether the new method is efficacious. However, the researcher's approach does not evaluate whether the new method is efficacious under commercial conditions. In a normal commercial setting, several different people with differing abilities will be mating sows. In this example, the researcher's desire to control all confounding variables (the inseminator in this example) created conditions that did not mimic on-farm operations.
Conduct of on-farm research is a constant balancing act between implementing principles of scientific inquiry and maintaining the characteristics of commercial production.2 Often, constraints in facility design, economics, and labor availability force one to compromise some of the principles of scientific inquiry. The investigator needs to judge whether these compromises will generate unreliable conclusions.
Willingly commit labor and financial resources
Properly conducted on-farm research requires time to allot animals to treatments, impose treatments, collect data, and summarize data. When feasible, one person or small group of employees may be assigned primary responsibility for imposing treatments and collecting data. Fully employed members of the labor force in a commercial production unit cannot be expected to perform their regular duties and take on the additional duties required to conduct a research project. This challenge may be addressed by increasing the size of the labor pool or by relieving some workers of lower priority duties during the period of the experiment. Either option has associated costs.
Pay attention to details
A properly designed experiment with a detailed protocol must be implemented without taking shortcuts or cutting corners. Deviating from the designed protocol introduces variables into the experiment that were not anticipated by, and may be unknown to, the investigators. Sometimes, conditions created by external forces such as disease, inclement weather, or unanticipated animal responses dictate a change in the protocol. A clear and honest discussion of the required changes and the reason for the changes among barn staff and investigators is necessary to ensure that the integrity of the experiment is maintained.
Enhancing success of on-farm trials
Developing a concisely written, detailed protocol
The protocol is the "official" set of instructions for conducting the experiment. Protocols are just as critical to the success of a detailed drug approval trial18 as they are to an investigation of an extensive production practice in the third world.1 The protocol may be the only source of information about conduct of the experiment when the investigator is not present or available to answer questions. Consequently, all important aspects of the experiment need to be described in the protocol. A protocol should include objectives, description of treatments, method of treatment allocation, type of data to be collected and frequency of collection, procedure for analysis of data, and contact information for the investigator. Collect only data pertinent to the experiment's objectives.1 Avoid the temptation to collect too much data, which may cloud the objective of the experiment and strain the patience of the barn staff. While detail is important to ensure proper execution of the experiment, most barn workers will not read a lengthy, intricate document. Therefore, focus on covering all the important points in a concise, user-friendly format when writing protocols.
All personnel involved in the experiment should sign the final protocol indicating that they have read and understand the methodology of the experiment. This is best done after the researcher(s) and the barn personnel meet to thoroughly discuss the protocol. Personnel who sign the final protocol are more likely to be conscientious about implementing it and can be held accountable for their actions or inactions.
Impose all treatments during same time period
Statistically valid comparisons among treatments can best be made if all treatments are imposed during the same period of time. Some investigators impose a treatment on the entire herd, then use a pre-treatment period as the control. This approach confounds the treatment with time. One cannot determine whether a biological response observed after the treatment was applied is associated with the treatment or some other factor(s) that coincidently changed between the pre-treatment and treatment periods. For example, any improvement in sow performance observed after a new feed additive was included in the diet could be associated with the feed additive, a greater proportion of third to fifth parity sows, a new farrowing house manager, or any other factor that changed concurrently with introduction of the new feed additive.
Ensure that workers are supportive
The workers responsible for imposing treatments and collecting data must see the value of conducting the experiment. There is widespread agreement among researchers that if on-farm research is to be successful, producers and barn staff must embrace the project.1,2,8,9 The investigator usually cannot be on site every day, so primary responsibility for carrying out the protocol rests with the barn workers. If they do not see value in the extra work required to conduct an experiment, they are unlikely to do a good job implementing the protocol. Imposing an experiment on a farm and work force that has not "bought into" the idea usually is a recipe for failure. The best candidates for on-farm research are farms where the entire work force continually strives to improve the farm's production and is attentive to details. These workers view an experiment as a route to improvement. During the design phase of the experiment, a dialogue with the farm staff often uncovers useful, more worker-friendly ways of conducting the trial. This dialogue and use of the workers' suggestions, when feasible, will help gain the support of the workers for the experiment. Researchers should be sure to share progress and results with the work force to keep them connected to the research effort.
Monitor data collection regularly
The investigator or a trusted technician with research training must be at the farm on a regular basis to monitor data collection.1,18 The frequency of these visits depends on the nature of the data being collected and the abilities of the workers. Barn staff on commercial farms generally have limited experience conducting experiments, because they focus on management practices to improve productivity. Therefore, they may spontaneously impose new management practices to improve production without understanding the effects of these practices on the experiment.
For instance, production workers may not understand the importance of an experimental unit. In a lactation feeding trial, sow and litter is often the experimental unit. If workers decide to remove the partition between adjacent farrowing crates in the last 2 or 3 days of lactation, hoping to lessen stress at weaning, the integrity of the experimental unit is lost for litter weaning weight. This may be a very reasonable practice in commercial production, but it may have huge negative effects on an experiment.
Check data integrity
Investigators must check the integrity of the data reported to them throughout the experiment to identify problems and implement corrective measures as required. If one waits until the end of the experiment, it may be too late to correct the problem. Data integrity checks give everyone increased confidence in the end result of the experiment.
One way to conduct a check of data integrity is to record the same information in two different ways. For instance, record daily sow feed intake at the farrowing crate and total weight of lactation feed delivered to the unit. Theoretically, the sum of feed offered to sows in the farrowing crates should equal the total amount of feed delivered to the unit during the same period. Of course, one needs to account for feed wastage and carryover feed in bins. Another example would be to record number of pigs born alive per litter, pigs transferred in and out of litters, pig deaths, and number of pigs weaned. Number of pigs born alive minus pig deaths plus or minus pig transfers should equal number of pigs weaned per litter.
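The litter balance check described above is easy to automate; in this Python sketch the record field names are hypothetical:

```python
def litter_balances(record):
    """Integrity check: born alive - deaths + transfers in - transfers out
    should equal the number of pigs weaned for the litter."""
    expected = (record["born_alive"] - record["deaths"]
                + record["transfers_in"] - record["transfers_out"])
    return expected == record["weaned"]

# Hypothetical litter records; the second fails the balance check
good = {"born_alive": 11, "deaths": 1, "transfers_in": 2,
        "transfers_out": 0, "weaned": 12}
bad  = {"born_alive": 11, "deaths": 1, "transfers_in": 0,
        "transfers_out": 0, "weaned": 12}
```

Running such a check as each litter is weaned flags recording errors while they can still be traced and corrected, rather than at the end of the trial.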
Blind treatments to barn staff
Barn staff may knowingly or unknowingly have a preference for a particular treatment and should be blinded to the treatments being imposed.18 As a result of their bias, they may think they see a biological response to treatments when really there is no response. This is particularly important if the workers are asked to record subjective data such as scouring scores, condition scores, or other similar measurements. Barn staff and coordinators should not be permitted to look at or summarize performance of animals while the experiment is underway, as this involvement may bias the experiment. Assigning nondescript labels to all treatments eliminates any bias the workers may harbor. Blinding treatments is not always possible. For example, a feed additive based on herbal supplements may have a characteristic aroma that betrays any attempt to blind the treatment.
On regular visits to the farm, observe the pigs' condition and behavior. Are they responding to treatments as expected? If not, why? This is important, because the investigator cannot rely solely on the data collected to determine whether the experiment was a success. We recently conducted a sow lactation trial on a large commercial farm. The barn records showed that sows in mid-lactation were consuming in excess of 9 kg of feed daily, but sows became excited and agitated when we entered the farrowing room, behaving similarly to limit-fed gestating sows. We learned that the amount of feed the workers thought they placed in the feeder was significantly less than the amount recorded on the feed sheet, because workers used a volumetric approach to measure feed. This problem was not apparent without observation of the sows.
Beware of volumetric feed measures
Most commercial farms are not equipped with tools to capture weight of feed offered to individual sows or pens of pigs. The costs of equipment or labor or both to weigh feed for individual sows or pens of pigs are high and are not practical in commercial settings, so investigators must rely on volumetric measures. Volumetric feed drops or feed scoops must be calibrated regularly to ensure that a 5-lb drop or scoop truly offers 5 lb. Changes in season, bulk density of feed, workers, and other factors may influence the amount of feed provided to pigs. If decisions are being made on the basis of feed efficiency or cost of feeding, then one should invest in systems that provide gravimetric measures of feed intake.
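A simple calibration record makes drift in volumetric drops visible; in this Python sketch the sample weights are hypothetical:

```python
def calibration_error_pct(nominal_lb, weighed_lb):
    """Percentage error of a volumetric feed drop: the average of the
    weighed samples compared with the nominal setting."""
    mean_weight = sum(weighed_lb) / len(weighed_lb)
    return (mean_weight - nominal_lb) / nominal_lb * 100

# Hypothetical: ten weighed samples from drops set to deliver 5 lb
samples = [4.6, 4.7, 4.5, 4.8, 4.6, 4.7, 4.6, 4.5, 4.7, 4.6]
error = calibration_error_pct(5.0, samples)  # negative: drops under-deliver
```

Repeating the weigh-back whenever the feed source, season, or crew changes, and logging the error each time, documents whether "5-lb" drops are still delivering 5 lb.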
Be aware of evolution in the production system
The primary reason for conducting on-farm research is to determine the efficacy of treatments under commercial conditions. However, commercial conditions change over time. Health status, genetic base, parity structure, pig flow, herd manager, labor force, or facilities may change during the course of the experiment. Some changes simply happen, with little opportunity for intervention by owners or managers. Other changes are imposed in response to economic forces. Investigators usually have little ability to influence these changes. Consequently, they must be aware of the changes and record them so that they can be considered when the results of the experiment are interpreted. Changes in a production system should be recorded in a daily log of events. This log should include changes that affect the entire unit (eg, changes in genetics, vaccinations, and feed supplier) and changes that directly affect the experiment (eg, power outage in a specific room, localized disease outbreak, mix-up with treatment labels).
Keep treatments simple
On-farm experiments should include a minimum number of treatments that are simple to implement.1,2,9 A small number of treatments, usually a control and an experimental treatment, allows the maximal number of replications per treatment within the number of animals available for the experiment. Treatments that are easy to implement are more likely to be imposed willingly and accurately by the barn workers. If the treatments are not imposed properly, the resulting data are meaningless.
Consult a statistician before the experiment starts
A statistician or other professional knowledgeable in experimental design and analysis must be consulted while the experiment is being designed to ensure that a valid statistical analysis of the data collected can be completed.14,19 Aaron and Hays11 suggested that a consulting statistician with training or interest in swine production would be most likely to provide useful advice. The existing layout of feed bins and feed lines, pig flow, or penning arrangements may prevent ideal allotment of animals to treatments. Often, a statistician can help design an allotment scheme that creates minimal disruption of the production unit while maintaining the ability to make valid comparisons at the end of the experiment. Statistical analysis after the experiment is completed cannot overcome a poor experimental design. Consulting statisticians or researchers knowledgeable in experimental design are available at every land-grant university in the United States. Investigators should contact the swine specialist in the extension service of their state's land-grant university for help in identifying a statistician.
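One question a statistician will typically address at the design stage is how many replicates each treatment needs to detect a difference worth acting on. The sketch below uses the standard normal approximation for a two-sided, two-treatment comparison; the detectable difference and standard deviation shown are hypothetical placeholders, and a statistician (or the power calculators cited in references 16 and 17) should supply values appropriate to the trial.

```python
import math
from scipy.stats import norm

def replicates_per_treatment(delta, sigma, alpha=0.05, power=0.80):
    """Approximate replicates per treatment for a two-treatment comparison.

    delta: smallest treatment difference worth detecting.
    sigma: expected standard deviation of the response.
    Uses the normal approximation: n = 2 * (z_alpha/2 + z_beta)^2 * (sigma/delta)^2.
    """
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided test
    z_beta = norm.ppf(power)
    n = 2 * (z_alpha + z_beta) ** 2 * (sigma / delta) ** 2
    return math.ceil(n)

# Hypothetical example: detect a 0.5-unit difference when the SD is 1.0,
# at alpha = 0.05 with 80% power.
n_needed = replicates_per_treatment(delta=0.5, sigma=1.0)  # 63 replicates per treatment
```

Calculations like this often reveal that the available animals support only a two-treatment comparison, reinforcing the advice above to keep treatments simple.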
Use proper statistical procedures to analyze data
A statistician should be consulted for analysis of the data.18 Preferably, the same statistician should be consulted during the design and analysis portions of the experiment. A fact sheet is available to help investigators conduct a simple, statistically valid analysis of an experiment with two treatments.12 Simply calculating the average litter size weaned by the control and treated sows or the overall average daily gain of control and treated pigs for comparison will not provide valid conclusions. A formal statistical analysis will provide the investigator with confidence that the observed differences were true biological differences and not simply due to random chance. The statistical analysis may also identify differences among treatments that were not apparent in a simple comparison of overall averages.
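For the simplest case of two treatments with independent experimental units, a formal analysis might take the form of a two-sample t test, as in the sketch below. The average daily gain values are hypothetical, and the actual analysis should follow the design of the experiment and the consulting statistician's advice (blocking, repeated measures, and pen-as-unit issues can all change the appropriate model).

```python
from scipy.stats import ttest_ind

# Hypothetical average daily gain (kg/d), one value per experimental unit.
control = [0.78, 0.82, 0.75, 0.80, 0.79, 0.77, 0.81, 0.76]
treated = [0.84, 0.88, 0.83, 0.86, 0.85, 0.82, 0.87, 0.84]

# Two-sample t test: is the difference in means larger than random
# variation among units would plausibly produce?
t_stat, p_value = ttest_ind(treated, control)

# A small p value (eg, < 0.05) supports a true treatment difference;
# comparing the raw means alone could not distinguish a real effect
# from chance variation.
```

Note that the experimental unit (pen or animal) must supply the replication here; entering individual pigs from a pen-based trial as independent observations would overstate the evidence.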
Evaluate performance using more than one response criterion
Researchers should collect data concurrently on a selected group of related variables. For instance, collecting information on weight gain, feed intake, and feed efficiency is helpful in determining whether a statistically significant response is biologically significant. A statistically significant improvement in daily weight gain without an improvement in feed intake or feed efficiency casts doubt on the biological significance of the improved weight gain. In contrast, improved weight gain coincident with increased feed intake gives the investigator increased confidence that the response is real.
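This cross-check can be made explicit by computing gain:feed from the gain and intake means. The pen means below are hypothetical and serve only to show the consistency test in arithmetic form.

```python
def gain_to_feed(adg_kg, adfi_kg):
    """Gain:feed ratio from average daily gain and average daily feed intake."""
    return adg_kg / adfi_kg

# Hypothetical pen means: the treatment improved gain and intake
# proportionally, leaving efficiency essentially unchanged.
gf_control = gain_to_feed(adg_kg=0.80, adfi_kg=2.00)
gf_treated = gain_to_feed(adg_kg=0.88, adfi_kg=2.20)

# Both ratios are about 0.40, so the extra gain is fully explained by
# extra intake -- an internally consistent, believable response. A gain
# improvement with no change in intake or efficiency would instead
# suggest a recording error or a chance result.
```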
- If conducted properly, on-farm research provides valuable information on the efficacy of new technologies in commercial production systems.
- Improperly conducted experiments are misleading and may encourage producers to implement management practices that do not generate an economic return or may even be detrimental to biological production.
- Retrospective analysis of data may identify associations among variables of interest but provides no evidence for a causal relationship between one or more manipulated variables and a production response.
- Communication with barn workers through concise protocols and carefully designed data collection procedures is crucial to conducting a successful on-farm experiment.
- Advice of a statistician or professional who is knowledgeable in experimental design and analysis is invaluable in designing and conducting an on-farm experiment that will generate valid, meaningful conclusions.
This publication has been supported in part by the Minnesota Agricultural Experiment Station, St Paul, Minnesota.
References - refereed
1. Hildebrand PE, Russell JT. Adaptability Analysis: A Method for the Design, Analysis and Interpretation of On-Farm Research-Extension. Ames, Iowa: Iowa State University Press; 1996:164.
5. Polson DD, Marsh WE, Dial GD. Population-based problem solving in swine herds. Swine Health Prod. 1998;6:267-272.
6. DeVries A. Statistical process control charts applied to dairy herd reproduction [PhD thesis]. St Paul, Minnesota: University of Minnesota; 2001:4-19.
8. Edwards-Jones G. Should we engage in farmer-participatory research in the UK? Outlook Agric. 2001;30:129-136.
9. Pervaiz A, Knipscheer HC. Conducting On-Farm Animal Research: Procedures and Economic Analysis. Morrilton, Arkansas: Winrock International Institute for Agricultural Development and International Development Research Centre; 1989.
10. Sumberg J, Okali C. Farmers' Experiments: Creating Local Knowledge. Boulder, Colorado: Lynne Rienner Publishers; 1997.
11. Aaron DK, Hays VW. Statistical techniques for the design and analysis of swine nutrition experiments. In: Lewis AJ, Southern LL, eds. Swine Nutrition. 2nd ed. Boca Raton, Florida: CRC Press LLC; 2001:881-901.
13. Steel RGD, Torrie JH. Principles and Procedures of Statistics: A Biometrical Approach. New York, New York: McGraw-Hill Book Company; 1980.
14. Gill JL. Design and Analysis of Experiments. Vol 1. Ames, Iowa: Iowa State University Press; 1978.
15. Berndtson WE. A simple, rapid and reliable method for selecting or assessing the number of replicates for animal experiments. J Anim Sci. 1991;69:67-76.
16. UCLA Department of Statistics. Power calculator. Available at: http://calculators.stat.ucla.edu/powercalc/. Accessed June 16, 2003.
17. Brant R. Power/sample size calculator. Available at: http://www.health.ucalgary.ca/~rollin/stats/ssize/n2.html. Accessed June 16, 2003.
18. United States Department of Health and Human Services. Guidance for Industry #85: Good Clinical Practice. Food and Drug Administration Center for Veterinary Medicine; 2001. Available at: http://www.fda.gov/cvm/guidance/guide85.pdf. Accessed June 2, 2003.
19. United States Department of Health and Human Services. Target Animal Safety Guidelines for New Animal Drugs, Guideline 33. Food and Drug Administration Center for Veterinary Medicine; 1989. Available at: http://www.fda.gov/cvm/guidance/guideline33.html. Accessed June 2, 2003.
20. Carter SD, Hill GM, Mahan DC, Nelssen JL, Richert BT, Shurson GC. Effects of dietary valine concentration on lactational performance of sows nursing large litters. J Anim Sci. 2000;78:2879-2884.
21. Johnston LJ, Pettigrew JE, Rust JW. Response of maternal-line sows to dietary protein concentration during lactation. J Anim Sci. 1993;71:2151-2156.
22. Cromwell GL, Hall DD, Clawson AJ, Combs GE, Knabe DA, Maxwell CV, Noland PR, Orr DE Jr, Prince TJ. Effects of additional feed during late gestation on reproductive performance of sows: A cooperative study. J Anim Sci. 1989;67:3-14.
23. Knabe DA, Brendemuhl JH, Chiba LI, Dove CR. Supplemental lysine for sows nursing large litters. J Anim Sci. 1996;74:1635-1640.
24. Touchette KJ, Allee GL, Newcomb MD, Boyd RD. The lysine requirement of lactating primiparous sows. J Anim Sci. 1998;76:1091-1097.
25. Koketsu Y, Dial GD, Pettigrew JE, Marsh WE, King VL. Characterization of feed intake patterns during lactation in commercial swine herds. J Anim Sci. 1996;74:1202-1210.
26. Leibbrandt VD, Johnston LJ, Shurson GC, Crenshaw JD, Libal GW, Arthur RD. Effect of nipple drinker water flow rate and season on performance of lactating swine. J Anim Sci. 2001;79:2770-2775.
27. Yang H, Pettigrew JE, Johnston LJ, Shurson GC, Walker RD. Lactational and subsequent reproductive responses of lactating sows to dietary lysine (protein) concentration. J Anim Sci. 2000;78:348-357.
28. Johnston LJ, Ellis M, Libal GW, Mayrose VB, Weldon WC, NCR-89 Committee on Swine Management. Effect of room temperature and dietary amino acid concentration on performance of lactating sows. J Anim Sci. 1999;77:1638-1644.
29. Hill GM, Mahan DC, Carter SD, Cromwell GL, Ewan RC, Harrold RL, Lewis AJ, Miller PS, Shurson GC, Veum TL. Effect of pharmacological concentrations of zinc oxide with or without the inclusion of an antibacterial agent on nursery pig performance. J Anim Sci. 2001;79:934-941.
30. Brumm MC, Ellis M, Johnston LJ, Rozeboom DW, Zimmerman DR, NCR-89 Committee on Swine Management. Interaction of swine nursery and grow-finish space allocations on performance. J Anim Sci. 2001;79:1967-1972.
31. Owen KQ, Nelssen JL, Goodband RD, Tokach MD, Friesen KG. Effect of dietary L-carnitine on growth performance and body composition in nursery and growing-finishing pigs. J Anim Sci. 2001;79:1509-1515.
32. Wolter BF, Ellis M, Curtis SE, Parr EN, Webel DM. Feeder location did not affect performance of weanling pigs in large groups. J Anim Sci. 2000;78:2784-2789.
33. Wolter BF, Ellis M, Curtis SE, Parr EN, Webel DM. Group size and floor-space allowance can affect weanling-pig performance. J Anim Sci. 2000;78:2062-2067.
34. Mavromichalis I, Hancock JD, Senne BW, Gugle TL, Kennedy GA, Hines RH, Wyatt CL. Enzyme supplementation and particle size of wheat in diets for nursery and finishing pigs. J Anim Sci. 2000;78:3086-3095.
35. Brumm MC, NCR-89 Committee on Management of Swine. Effect of space allowance on barrow performance to 136 kilograms body weight. J Anim Sci. 1996;74:745-749.
36. Randolph JH, Cromwell GL, Stahly TS, Kratzer DD. Effects of group size and space allowance on performance and behavior of swine. J Anim Sci. 1981;53:922-927.
37. Wolter BF, Ellis M, Curtis SE, Augspurger NR, Hamilton DN, Parr EN, Webel DM. Effect of group size on pig performance in a wean-to-finish production system. J Anim Sci. 2001;79:1067-1073.
References - non refereed
2. Anderson MD, Lockeretz W. On-Farm Research Techniques: Report on a Workshop. 1991. Institute for Alternative Agriculture Occasional Paper Series No. 1.
3. Barban J. Statistical Process Control. Proc AD Leman Swine Conf Workshop. St Paul, Minnesota. 2001;1-11.
4. Dial GD, FitzSimmons M, BeVier GW, Wiseman BS. Systems approaches for improving the productivity of the breeding herd. Proc AD Leman Swine Conf. St Paul, Minnesota. 1994;21:84-93.
7. Deen J. Using statistical process control in swine production. Proc North Amer Vet Conf. 1997;11:987-988.
12. Reese DE, Stroup WW. Conducting Pig Feed Trials on the Farm. 1992. University of Nebraska Cooperative Extension Bulletin EC 92-270-B.