The relationship between the two processes of thought known as deduction and induction requires, for its explanation, preliminary general definitions of the two terms. Broadly speaking, the term 'deduction' has two synonyms: 'analysis' and 'mediate inference' (mediate inferences are those conclusions implied by a combination of propositions). More specifically:-

Deduction is that process whereby we derive what seem to be 'new' propositions from the combination of two other propositions - one of which is most often general (or universal), while the other brings a particular case under this general rule. Hence there must be a common factor, or middle term of some sort, in all deductive reasoning by mediate inference. Deduction, however, also refers to the processes of immediate inference, whereby the implications of the data within a single proposition are unfolded.

Induction is in a sense opposed to deduction as a whole in that it starts with isolated, fragmentary and seemingly discrepant instances - or particular propositions - and from these it builds the general propositions and universal principles from which deduction starts. When we consider induction's methods of achieving this it is apparent that it could not proceed without deduction - neither could deduction begin without induction having been applied first to provide the 'universals'. So induction actually employs the deductive process within itself.

Hence it can be seen that 'induction' and 'deduction' are not opposed in any real sense - they 'cover the same ground' - but the difference is that they begin at different starting points, and hence proceed in 'opposed' directions. Deduction reasons from 'general to particular' and induction from 'particular to general'. I may arrive at the generalisation "All men are mortal" either by enumeration (synthesis) or by analysis of one or a few cases of human beings. The former process lays the proposition open to the charge of sterility since it can mean no more than "All men (so far as they have been enumerated) are mortal".

By way of illustration: Consider the statement "In modern times all wars have contributed, throughout their first months, to the increased economic prosperity of the successful aggressor." This generalisation has been arrived at inductively - by the examination of various instances of countries at war. Obviously the observation has required a good deal of deduction in order to decide that economic prosperity occurred as stated.

Deduction would be used in considering some new war - if the general statement is true in all cases of war, then we would have been able to deduce that this new war will also produce the same economic factors.

Propositions that have been arrived at by deduction are not 'new' in the same way as those discovered by induction. The former only seem 'new' to the person who did not realise that the knowledge was contained already in the universal proposition. It is induction that provides the general propositions, however, and so produces a substantial advance or newness in knowledge.

In general induction never obtains a complete proof. All that the conclusion it arrives at can be said to state is that "this is true of all instances hitherto observed, and there is high probability it will be true of all future instances". In other words, induction is incomplete enumeration where we know that all existent or possible future cases have not been taken into account. We generally, for practical purposes, extend this generalisation to all cases without exception - until at least one contradictory instance is found.

Induction, when combined with analysis, assists us in analysing the conditions of the given but it is from those conditions - not the mere plurality of instances, that the inferences of analysis are made.

The data of experience are concrete and exceedingly complex. Science assumes that each element in this complexity is the necessary expression of universal law.

To discover any one of them it is necessary to isolate it from the rest in thought. The irrelevant details which obscure these essential elements must be put on one side. The process by which we achieve this separation or knowledge of the exact relation is a process of analysis. It is in relation to such analysis that a plurality of instances is advantageous, as the form of any universal law is generally more clearly seen when comparison of instances enables the observer to disregard the inessential elements.

The process of analysis of phenomena is conducted, after observation of the facts in question, by formation of an hypothesis (a conjecture or supposition of what the universal factors or elements are).

Having framed an hypothesis it is necessary to test it to see if it agrees completely with known reality. It is this process of testing of hypothesis which gives rise to the distinction between direct and indirect induction - which are the two methods by which it may be tested.

DIRECT INDUCTION consists of the following basic steps:-
1. initial observation of facts to be studied
2. formation of hypothesis - aided by enumeration or analogy
3. testing of hypothesis by observation or experiment - or both (such as by Mill's Methods or statistical method) and
4. acceptance, modification or rejection of hypothesis.

INDIRECT INDUCTION consists in the following basic steps:-
1. initial observation of facts to be studied
2. formation of hypothesis - aided by enumeration or analogy
3. deduction of consequences of this hypothesis (possibly involving quantitative or mathematical calculation) to arrive at a test-implication
4. testing of the deduced implication by analysis of phenomena -and acceptance, modification or rejection of hypothesis.

Indirect induction, because it involves both induction and a deductive phase, is commonly known as the 'hypothetico-deductive' method. Induction is the term applied to the method of discovering knowledge about matters of fact - or, in other words, about the material world around us, both that part of it which our sense perception allows us to perceive and that part which we cannot perceive other than as an abstraction in our thoughts.

Induction's aim can be said to be, in short, the establishment of the general (or universal) proposition - which, when of great enough generality and when well established by investigation, comes to be considered a 'principle' or, in a more familiar term, a 'Law' (such as the Laws of Motion, Gravity, etc.). The 'generalisations' which induction sets out to establish are primarily concerned with the 'essential' character of phenomena, which is to say those attributes which make a thing what it is and without which it would not be what it is. So induction consists in the inference of general laws. To explain this further an example is helpful:-

Suppose a farmer finds that an increase of labour and capital expenditure on a particular field brings in a proportionately lesser return each year. From this, the farmer may often draw the conclusion that this happens in all similar cases and apply this experience to other parts of the farm. This is a 'rule of thumb' - and so a rather unscientific generalisation. The trained economist, however, will formulate the generalisation with greater exactness into a law. He may observe many instances and apply deductive and inductive reasoning to establish such a law - which he may express in a more-or-less 'abstract' way as follows:- "After a certain point, other things not changing, the returns from any part of land will decrease with successive applications of labour and increased expenditure." Such a statement is a 'generalisation' and has been inferred from the facts by successive applications of induction and deduction, synthesis and analysis. When such a law has been formulated and tested by comparison again with the facts, induction then attempts to relate this generalisation to other generalisations - such that an entire body of inter-related generalities (often known as 'laws') can be built up in the mind. This 'knowledge' is the aim of induction.

Methods of Direct Induction

The methods discussed here are auxiliary to the hypothetico-deductive or 'indirect' method of induction. They are more analytical than synthetic in nature, and they rely on observation and deductions about those observations. They do not serve to develop deductive systems at a higher level of generality from which test implications are calculated. That is what is here termed the 'indirect' or hypothetico-deductive method proper.

J.S. Mill (1806-1873) formulated a number of methods intrinsic to scientific research. These have been variously modified, supplemented and reformulated since Mill's time so as to preserve their insights yet bring them more into line with empirical theory and practice. Such derived formulations of Mill's methods follow here:-

Method of Agreement

When observation shows that two events accompany one another, the probability that they are causally connected increases with the number of varied circumstances under which they are observed.
The chief function of this method is to suggest hypotheses on the basis of varied observations. The method cannot provide adequate empirical verification for exact scientific standards. It can be symbolised as:-
A B C ------ (accompanied by) ------ a b c
A D E ------ (accompanied by) ------ a d e
therefore A ------ (accompanied by) ------ a (?)
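The schema above can be sketched as a set intersection over the antecedents of each instance. The factor names are the letters of the schema; the function name is our own:

```python
# Method of Agreement as set intersection: the factor(s) common to all
# instances that were accompanied by the effect.

def method_of_agreement(instances):
    """instances: list of sets of antecedent factors, each set observed
    together with the effect. Returns the factors common to all."""
    common = set(instances[0])
    for inst in instances[1:]:
        common &= inst
    return common

# A B C -- effect; A D E -- effect; therefore A (?)
print(method_of_agreement([{"A", "B", "C"}, {"A", "D", "E"}]))  # {'A'}
```

As the text notes, this suggests a hypothesis (A is connected with the effect) but does not verify it.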

Method of Difference

When the introduction of an agent to one of two otherwise identical sets of circumstances is followed by the appearance or disappearance of a certain event, it is likely that agent and event are causally connected.
This is the essence of experimental method. The agent introduced may not be the only cause of any change that ensues. Collateral but unobserved changes may arise, being the real causes. The factor introduced may also simply release a previously existing balance of forces. For example, removing the brake from a train on an incline releases counterposed forces, but is not the cause of movement itself. It may be symbolised as:-
A B C (x1,x2,x3...) -------------> E (y1,y2...)
therefore A -------------> E      (?)

When this method is supplemented by the observation of negative instances, its results become more secure through establishing - in addition to 'If A, then X' - the negative instance: 'If not-A, then not-X'. This amounts to demonstrating: 'Both if X, then A and if A, then X'.

It may be symbolised as:-
A B C (x1,x2,x3...) -------------> E (y1,y2...)
    B C (x1,x2,x3...) -------------> non-E (y1,y2...)
therefore A -------------> E      (?)
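As a sketch, the difference schema amounts to a set difference between the antecedents of the positive and negative instances. The letters follow the schema; the function name is invented:

```python
# Method of Difference as set difference: the factor present when the
# effect occurs and absent when it does not, the circumstances being
# assumed otherwise identical.

def method_of_difference(with_effect, without_effect):
    """Both arguments are sets of antecedent factors. Returns the
    factor(s) by which the two instances differ."""
    return with_effect - without_effect

# A B C --> E ;  B C --> non-E ;  therefore A --> E (?)
print(method_of_difference({"A", "B", "C"}, {"B", "C"}))  # {'A'}
```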

Joint Method of Agreement and Difference

Whatever is present in numerous observed instances of the presence of any event and absent in observed instances of its absence, is probably causally connected with it.
This method applies both to natural observation and experiment. It accounts for negative as well as for positive instances. If the circumstances are fully known beyond reasonable doubt, the probability of its providing scientific proof is very high, not least because it virtually eliminates the possibility of there being a plurality of causes. It can be symbolised as:-

S B J F (x1,x2,x3...) -----------------> E (y1,y2...)
S B L N (x1,x2,x3...) -----------------> E (y1,y2...)
S L M F (x1,x2,x3...) -----------------> E (y1,y2...)
S J K T (x1,x2,x3...) -----------------> E (y1,y2...)

therefore S -----------> E      (?)
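A minimal sketch of the joint method, using the letters of the schema above (the helper name is invented): keep what is common to every instance where the effect is present, then discard anything that also appears where the effect is absent.

```python
# Joint Method of Agreement and Difference as set operations.

def joint_method(present, absent):
    """present: antecedent sets observed with the effect;
    absent: antecedent sets observed without it."""
    common = set.intersection(*map(set, present)) if present else set()
    seen_in_absence = set().union(*map(set, absent)) if absent else set()
    return common - seen_in_absence

present = [{"S", "B", "J", "F"}, {"S", "B", "L", "N"},
           {"S", "L", "M", "F"}, {"S", "J", "K", "T"}]
absent = [{"B", "J", "F"}]          # effect absent, and S absent too
print(joint_method(present, absent))  # {'S'}
```

The negative instance is what gives this method its extra force: any candidate surviving the intersection is struck out if it ever occurs without the effect.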

The Method of Concomitant Variation

Any event which varies quantitatively whenever another event varies by corresponding quantities is likely either to be directly or indirectly connected causally with it, provided that other circumstances bear no correspondence to it or vary independently from it.

This is a quantitative method, usual in the statistical analysis employed in physics and most other natural sciences, and in the social sciences usually regarded as part of the so-called 'multi-variable analysis' method. It is capable of establishing empirical generalisations, but it cannot demonstrate causal connections with any degree of adequacy. The degree of likelihood of this being so, however, can increase when this method is combined with the 'joint method of agreement and difference'. Concomitant variation can be symbolised as:-

1A 0B 2L 3J (x1,x2,x3...) -----------------> 1E (y1,y2...)
2A 0B 1L 1J (x1,x2,x3...) -----------------> 1E (y1,y2...)
3A 4B 0L 1J (x1,x2,x3...) -----------------> 1E (y1,y2...)

therefore A -----------> E      (?)
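Concomitant variation can be illustrated with a plain Pearson correlation coefficient computed over invented quantitative readings. The coefficient is standard; the data are not from any real study:

```python
# Concomitant variation as correlation: the candidate factor whose
# quantities vary in step with the effect. Pure-Python Pearson r.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

factor_a = [1, 2, 3, 4]   # invented readings of factor A
effect_e = [2, 4, 6, 8]   # effect varies in step with A
factor_b = [5, 1, 4, 2]   # varies independently of the effect
print(round(pearson(factor_a, effect_e), 3))  # 1.0
```

As the text stresses, even a coefficient of 1.0 establishes only an empirical generalisation, not a causal connection.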

The Method of Residues

Subtraction from a complex event of what is already established to be the effect of certain antecedent events leaves a residue of the event, which is likely to be the effect of the remaining antecedents.

This method is essentially quantitative. It was used in such well-known discoveries as that of the existence and trajectory of the planet Neptune, the determination of the velocity of light, and the discovery of radium in pitchblende by Madame Curie. It is applicable only to complex events where most of the operative causal relations are already well established. As a way of indicating possible causes of observed phenomena it can provide hypotheses for testing by the indirect 'hypothetico-deductive' method.
Given that only A causes B,
only C causes D,
and only E causes F,
while it is known that A C E G causes B D F H,
then G causes H (?)
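The schema can be sketched arithmetically. The quantities below are invented for illustration:

```python
# Method of Residues: subtract the effects already accounted for by
# known causes; what is left is attributed to the remaining antecedent.

def residue(total_effect, known_effects):
    """total_effect: measured effect of the whole antecedent complex;
    known_effects: effects already established for some antecedents."""
    return total_effect - sum(known_effects)

# A, C, E together with G produce 10 units of effect; A, C and E are
# already known to account for 3, 2 and 4 units. The residue of 1 unit
# is ascribed to G.
print(residue(10, [3, 2, 4]))  # 1
```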

Statistical Method

What is generally known as statistical method (i.e. not being one of Mill's methods as such) is really a comprehensive method of enumeration, one which takes note of exceptions and makes observations cover as wide a field as possible. It is used where the complexity of events makes other methods impossible in practice.

The principal steps in statistical method are:-
1) Collection of material (data), which involves observation, counting or measurement of the facts relevant to the matter in hand. These should be gathered from as wide a field as possible, to ensure that they are as representative as possible of the individuals in that field. Questionnaires, interviews, and counting of instances by direct observation can provide the data.

2) Classification, tabulation and correlation of material (data) is carried out according to the investigator's aim. The use of tables, diagrams and graphs is helpful in giving an overview of relations discovered in the processed data. The tables are thus 'summarised' with the aid of various averages and coefficients of association and correlation, such as those of 'reliability' and 'verifiability'.

3) Critical interpretation of the correlated data. The methods of interpretation are partly rational, partly purely mathematical, depending upon the nature of the study in hand. The possible sources of error in statistical analysis are very many, yet even when all have been considered and may be regarded as eliminated, statistical analysis cannot establish causal relations. High correlation between variables - or sets of variables - can always later prove to be due to unobserved causes. Further, correlations can frequently be shown between events which it would be ridiculous to consider as related in any way. This weakens the inductive force of argument of the statistical method. This interpretative step follows the first two above (which are essentially descriptive) and relies upon the theory of statistical inference, which is bound up with probability theory. As an inductive process, statistical analysis can provide the basis of predictions indicating what may (but not what certainly will) happen in the future or in cases sufficiently like those studied.

The role of deduction in science: axiomatic systems

Perhaps the chief use of deductive processes in science is represented by mathematics. Pure mathematical systems are based upon initial statements known as 'axioms'. A mathematical system is thus an axiomatic system, i.e. one following certain rules for its construction, whereby many statements can be derived from a few axioms. The axioms and what can be deduced from them are regarded as certain (non-hypothetical), provided the deductions have been properly carried out.

In a hypothetico-deductive system, only observational statements and generalisations about observations (i.e. hypotheses or 'theories') are uncertain in principle. Insofar as mathematical statements are based upon observational statements (such as a statement about a quantity observed) or upon theoretical statements (such as E = mc²), they are thus also uncertain in principle. Thus, it is only the sort of deductions that serve to calculate the implications of an hypothesis (or so-called 'test implications') which in principle can be regarded as certain. They have the logical form:-
"If observations 1, 2, 3, 4, n... are true, then observation x must be made" (under given conditions).

Enumeration and Analogy

There are no definite rules for the formation of hypotheses - but the enumerative process often serves as a good pointer.
An enumerative generalisation is a general statement based on the counting of instances, all of which have been found to agree in some respect. Because things are similar and dissimilar - or have certain common properties - we tend to group them together in our minds. The first similarity that strikes the mind is that of external observable resemblances, of which the instances may be counted. Such a collection of instances suggests - but does not prove - that the generalisation may be extended to all cases without exception. This is shown by the following:-
We have observed that certain individuals a, b, c, d have the quality P and we have classed a, b, c, d under the class-name S. So we get:-
All a, b, c, d are P
All a, b, c, d are S
Therefore All S is P (i.e. every S is P)
This is an invalid argument (by 'illicit process'). The conclusion does not follow with necessity from the premises, so there is no formal ground for asserting this conclusion - but still its truth is possible.
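The provisional character of such a generalisation can be illustrated with invented instances: it holds of everything enumerated so far, yet a single fresh counter-instance overturns it.

```python
# An enumerative generalisation "All S is P" checked against instances.
# It reports only on what has been enumerated; one counter-instance
# refutes it.

def all_s_are_p(instances):
    """instances: list of (is_s, is_p) pairs. True while no observed S
    lacks P -- which is all that enumeration can ever establish."""
    return all(is_p for is_s, is_p in instances if is_s)

observed = [(True, True), (True, True), (False, False)]
print(all_s_are_p(observed))                    # True (so far)
print(all_s_are_p(observed + [(True, False)]))  # False
```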

Enumeration is the basis of generalisation - and therefore of all induction. The process of enumeration is aided by the class-names of ordinary speech - since many objects have already been classed for us. So every observed regularity of connection between phenomena suggests a question as to whether it is universal and this mode of inference is induction by simple enumeration.
Enumeration leading to the collection of statistics may be valuable if the unit selected is relevant and is systematically subjected to exhaustive observation. If the inference is based on the resemblances themselves, rather than the number of times they occur, it is primarily inference from analogy or argument from analogy.

Analogy and enumeration are not two separate and independent processes.
Argument from analogy can be defined as an inference from one instance to another which resembles it in some respects.
In argument from analogy we do not start from the mere accumulation of instances but from their resemblance in certain respects - i.e. we do not merely count instances but consider their quality and character.
A simple example: from a well-known type of tree, A, one may argue about a lesser-known type, B. The two trees resemble each other in the following positive points:-
1) average height
2) colour of summer leaves
3) structure of leaves
4) average length of life
and fail to resemble each other in one respect: the climate where they are found. This last point is the negative analogy, the first four being the positive analogy. One may argue by analogy that there is a likelihood that a sixth characteristic, known to belong to tree type A, will also prove to belong to tree type B. The positive analogy supports this; the negative analogy weakens the likelihood again.
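As a rough illustrative sketch, one might score the analogy by the proportion of compared respects in which the two cases agree. This scoring rule is an invented heuristic, not a standard measure:

```python
# Weighing an argument from analogy: the positive analogy (shared
# respects) supports the inference, the negative analogy (respects of
# difference) weakens it. The proportion below is a toy heuristic.

def analogy_strength(positive, negative):
    """Fraction of the compared respects in which the two cases agree."""
    total = len(positive) + len(negative)
    return len(positive) / total if total else 0.0

positive = ["average height", "colour of summer leaves",
            "structure of leaves", "average length of life"]
negative = ["climate where found"]
print(analogy_strength(positive, negative))  # 0.8
```

A real appraisal would also weight each respect by its importance, as the two factors below make clear; mere counting ignores relevance.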

The force of the argument from analogy is conditioned by two chief factors:-
1) importance: the more highly relevant the phenomena involved (i.e. the essential or important ones) - the stronger the analogy.
2) comprehensiveness: the more phenomena bearing resemblances that are included, the stronger the analogy.
However, argument from analogy is not one in which the conclusion follows with necessity from the premises, for the conclusion may be true or false. Formally expressed:-
If P is X (where X is the characteristic(s) common to both series of phenomena in the analogy) and S is X,
then S is P (invalid by undistributed middle term). Fallacies due to analogy easily arise through confusion over the relative importance of the properties involved, which is often caused by the use of ambiguous metaphors or figurative language that leads the thinker to neglect what seems trivial in favour of what seems striking.

For example, a book by G. Heard (1957), "Is Another World Watching Us?", contains arguments by analogy that the sun is about to turn nova. These were that, as the sun is a pulsing star or Cepheid, and our nuclear bombs have increased the size of sunspots, these may be taken as signs of 'digestive trouble', just as spots on our face sometimes tell us about internal conflicts in the body. This is a fallacious use of metaphor ('digestion') and confusion about which properties are important or relevant, due to lack of knowledge about astrophysics etc.

Hypothetico-deductive Method ('indirect' induction)

As already indicated, the chief phases involved in the hypothetico-deductive method are:-
1) observation
2) hypothesis formulation
3) deduction of the hypothesis' consequences so as to derive one or more test implications
4) testing of the hypothesis by final observation.
Enumeration and the methods here listed under direct induction can function both as ways of organising observational materials and of preparing for hypothesis formulation, in which analogies may prove fruitful. Other aspects of observation and of hypotheses are considered under separate headings in the following:-


Observation

All scientific knowledge rests basically on the accuracy of reports of the original observations (testimony), plus the accuracy of the observations themselves. Observation is the application of our sensory faculties to the accurate determination of events produced in the ordinary course of nature, which are thus not under our conscious or technical control.
As defined, observation here excludes experiment, which goes beyond such natural observation to controlled observation. Simple observation is ordinary sense perception and so involves reference to what is already in the mind i.e. - it involves interpretation, inference and selective interest.
Fallacies in observation

Observation of a given phenomenon is a process of extreme delicacy and difficulty. In the first place, errors are often due to idiosyncrasy or the psychological make-up of the individual. The task is to exclude this. Fallacious observation falls into the following general categories:-
1) mal-observation - by mal-perception (wrong sense data) or by misinterpretation of sense data
2) mis-selection and non-observation - by neglect of instances or by neglect of operative conditions

Mal-observation by mal-perception occurs when the senses provide incorrect information about the surroundings. When, for example, the eye registers so-called "after-images" against a dark background after staring into a bright light, the sensory data are incorrect. Further, under a variety of physiological conditions, the faculty of sense is impaired or distorted such as in high fever, under the influence of drugs or alcohol, after injury to ears, eyes and so on.
Mal-observation by misinterpretation of sense data is 'wrong interpretation of our sense impressions'. This arises at various levels of observation. At the most basic level, we make what have been called 'instinctive inferences'. For example, seeing a ship on the horizon, we 'automatically infer' that it is a ship - unthinkingly comparing our visual impression with our memory or previous experiences. Even in this one can be mistaken. What at first was seen to be a ship may, upon clearance of the weather, be seen to be a distant headland. Optical illusions give rise to mal-observation at this level. Sometimes we find we are simply unable to interpret our sense impressions rightly. At a more sophisticated level, our conscious inferences can be false while being confused with instinctive inferences. In other words, we think we have made no conscious inference when we have done so. This can be accounted for by force of habit, prejudice of mind etc.

Misjudgement of meaning

Almost invariably, when mal-observation occurs, the error lies in the meaning given to the impression(s) and it is determined either by past experience or by inference. Human behaviour is particularly open to misinterpretation. When a person says something, we can often misinterpret by misjudging their actual thoughts or feelings. Human actions can moreover be so complex - have so many potential intentions, purposes, or even disguised motives - that interpretation of even quite simple acts can vary considerably between different observers. Especially in the psychological, social and human sciences therefore, mal-observation by misinterpretation is a major source of error, it being commonly much harder to understand a new meaning - say, in a way of being, a social custom, a foreign viewpoint - than to recognise a familiar one from one's own experience.

Mis-selection

The fact that observation also depends on selection leads to two further possible traps - that we do not select what is relevant, or that we lay too much stress on what is trivial and incidental in trying to avoid the first error. Only abundant knowledge and the capability of the scientist in the relevant area enable a more or less correct appraisal of the facts.

Non-observation by neglect of instances

This is most liable to happen in the earliest stages of inductive enquiry when, by simple enumeration of instances an attempt is made to determine exactly what is the character of the phenomena to be explained.
1) A frequent source of the error is bias - observing only what is acceptable to our own private theories.
2) Another source is when attention is directed to positive instances only, and the negative instances are neglected (i.e. instances where one would expect the phenomenon to occur, but where it is not observed).
3) Another danger is the tendency to infer that because a phenomenon has never been noticed or observed in the past, that it is nonexistent.
4) When a proposition is accepted on merely negative evidence (i.e. the absence of otherwise-expected observables), care should be taken to make that evidence as complete as possible.
Non-observation by neglect of operative conditions

This happens most frequently at a later stage of the inductive process, when the analysis of phenomena has begun. Causes or circumstantial factors other than those under consideration may be operative, owing to inaccuracy of analysis. An analysis may leave out of consideration some essential element of the phenomena. The social and economic sciences must especially guard against this particular fallacy.
Scientific instruments

The use of scientific instruments which enable observation by one or other of our senses, but which do not modify the object observed, is non-experimental. The instruments themselves embody much knowledge and have grown out of previous research to contribute greatly to observation. But their use is dependent for its accuracy and fruitfulness on the qualities of the observer. Examples include spectroscopes, telescopes, radio-telescopes, microscopes, photographic plates, thermometers, balancing scales etc.
Hence we can see that such scientific instruments are advantageous for two reasons:-
1) They extend the range of observation (telescope, ammeter etc.)
2) They increase its quantitative exactness (scales, photography).


Testimony

The sciences which rely primarily on testimony are the historical and, to some extent, the psychological, sociological and political. Even the natural scientist frequently must rely upon testimony as to the results of other scientists' observations and experiments. Testimony is often the only source of historical fact. Likewise, personal opinion - whether on psychological, social or political questions - is a form of testimony. All testimony must be subjected to careful sifting - and hence requires an expert in the field concerned to do this sifting. The 'sifting of testimony' is made necessary by the liability of observers to:-
1) fill out an incomplete story from 'memory'
2) remember incorrectly the details when recording them
3) purposefully distort the truth for their own ends
4) misobserve in the first place
5) have been actively involved in what they report, and therefore be biased or not in possession of the full picture
6) have failed to master the precise and unambiguous expression of what they would convey through written evidence.
Several witnesses' agreement on a point, provided collusion did not exist between them, provides a strong probability of the objective truth of the testimony.
Conflicting accounts must be carefully compared - as when we have several who agree and one who disagrees. The mere weight of numbers does not constitute proof, as crowds are liable to illusions as well as individuals.


Experiment

Experiment is not merely observation under artificial and determinable conditions. We may define it as follows:-
Experiment is observation under determinate conditions (which constitute an integral part of the observation), involving the construction of a typical and crucial case on a plan thought out in advance in order to test a hypothesis.

The relation between observation in general and experiment best helps to describe the nature of experiment. Observation aims at a full and exact knowledge of all the conditions without which the phenomena observed would not occur. These conditions are not presented by nature in isolation but are overlaid with many elements which obscure them from view. Hence a purely mental analysis is often insufficient to get rid of these 'extra' factors (which are not essential for the occurrence of the phenomena).

Here experiment comes to the aid of observation by supplying a means of limiting or isolating certain natural conditions (i.e. observation under test conditions). By isolation and combination of physical agents it can so manipulate them as to determine, in many cases, the conditions under which the phenomena to be examined occur; thus instances can be produced at leisure and particular phenomena can be isolated, so that the kind of variations we require can often be obtained. Note that what has been termed 'natural experiment' refers to observation under varied circumstances over which we have no control (e.g. observations of stellar parallax, planet transits, eclipses, the fall of 'natural' meteors - such as the fall in 1803 at L'Aigle).

So-called 'negative experiment' establishes a fact on purely negative evidence. For example, Pouchet supposedly held that the 'spontaneous generation' of germs occurred in hay. He filled a bottle with boiling water and plunged it upside-down into a basin of mercury, through which he then introduced pure oxygen and hay (the hay having been heated to 100°C) into the bottle. Generation of germs took place in the hay. Pasteur, however, showed that the mercury contained germs. This 'negative instance' or 'negative evidence' had been overlooked by Pouchet.
The aim of experiment is to eliminate all conditions which are not specifically operative in the particular case under consideration and either to strengthen or weaken the hypothesis concerned. One can only be sure that an experiment gives repeatable or invariable observations when it is established both
1) that the phenomenon (S) is present under the particular conditions of the investigation (x), and
2) that the phenomenon (S) is absent in the absence of the particular conditions of the investigation (x).
In short: both if x, then S, and if not-x, then not-S.


Some important basic points about the nature and use of measurement in scientific investigation are:-
1) We never attempt to estimate absolute measurements - that is - all measurement is comparison with a standard unit - or all measurement is comparative.
2) All measurement rests, at bottom, upon personal observation by human beings. The degrees of accuracy may be very high - but we can never claim absolute precision. The accuracy of measurement is dependent on - and relative to - our powers to distinguish differences by our senses.
3) The more generalised a natural scientific law becomes, the more quantitatively precise it is - and so the physical sciences may be seen to advance with advances in measurement. To what extent this is true of the various human sciences is a matter of controversy, depending upon whether all qualitative phenomena can or cannot meaningfully be quantified.
4) Laws are more exact than our observations and experiments can ever be. Laws are suppositions - but sufficiently-verified ones - of what would happen to any element in a complex of phenomena if the other elements that make it up were removed. This is most often impossible.
5) Measurement lies at the basis of quantitative analysis and consists of the correlation of a property (of an object) with a number selected in a certain way.
6) Measurement depends on the transitive symmetrical relation of 'matching'.
7) The fundamental rules of measurement are:-
a) Two bodies matching a third (w.r.t. a given property) match each other;
b) The addition of objects having the given property increases that property in accordance with arithmetical laws;
c) The addition of equals yields equals.
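Rules a) to c) can be given a minimal concrete illustration. The following Python sketch uses rod lengths in arbitrary units, where 'matching' two rods means they have equal length; the numbers are invented for illustration only:

```python
# 'Matching' two rods means equality of length; the relation is
# symmetrical (matches(a, b) == matches(b, a)).
def matches(a, b):
    return a == b

# a) Two bodies matching a third match each other (transitivity).
rod1, rod2, standard = 5, 5, 5
assert matches(rod1, standard) and matches(rod2, standard)
assert matches(rod1, rod2)

# b) Placing rods end to end adds their lengths arithmetically.
assert rod1 + rod2 == 10

# c) The addition of equals yields equals: adding matching rods
#    to matching rods produces results that still match.
rod3, rod4 = 7, 7
assert matches(rod1 + rod3, rod2 + rod4)
```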


Hypothesis may be defined as a supposition, made on evidence recognised as insufficient, either as to the universal law of relation exhibited in the particular phenomena before us or - where such a law has already been empirically demonstrated - as an attempt to account for that law by relating it to others.

In direct induction the former is generally the case, i.e. to guess at a causal law. In indirect induction (i.e. the hypothetical-deductive method) it is more often the latter, i.e. to account for the causal law itself. If on further examination the supposition (or hypothesis) proves to hold up under further observation, it becomes 'inter-subjectively verified'. Sometimes an 'unverified' hypothesis contains sufficient truth that an exact modification of it, or of the laws with which it conflicts, allows it to be retained. Otherwise it is rejected as 'unverified'.

Every hypothesis is a guide to further enquiry towards the ultimate goal of explanation, which has traditionally been regarded as the ideal of science. Whether this ideal can ever be reached, or whether it is a necessary focus for scientific theory, is a question open to various philosophical views and remains controversial. The suggestion of a hypothesis is a purely individual process and does not admit of general rules. A hypothesis generally occurs to someone with a wide knowledge of the order of facts under investigation.

A fruitful hypothesis always incorporates past knowledge and arises out of it. Probably for this reason people of the same epoch often make the same discoveries simultaneously. Hypotheses which prove most fruitful stand in close relation to contemporary knowledge. Two ways in which hypotheses are sometimes arrived at are enumeration and analogy.

If an hypothesis proves, upon further observation, to account for the facts adequately, it is retained as part of the respective science. One may say that it is 'inter-subjectively verified' if it gains sufficient acceptance in the scientific community to become an established generalisation. Whether it is also true is another question. Inter-subjective verification is not tantamount to objective verification, which requires certainty and universality of the statement. Whether this can be achieved at all is still a matter of debate within philosophy and science.

Many will assert with David Hume that no empirical generalisation can be regarded as known to be true and that, at best, one can say there is a particular degree of likelihood of any hypothesis holding true in future, however many observations and applications have supported it in the past. Others have maintained, as Einstein held, that there is nothing indeterminate about physical reality and that the laws of relativity and of classical physics therefore apply universally, each within its respective sphere of application. Immanuel Kant, for example, held that causal relations can be known to be true and universal.

Whatever the eventual outcome of this controversy, a general caution is widely accepted: however well-established any hypothesis may be, it should never be regarded as impossible of revision in principle, should that prove necessary. In support of this it is held that induction is logically uncertain, as can be seen by considering the following hypothesis and observation:-
Hypothesis: If profits are to increase, prices must be raised. (If A, then X)
Observation: Prices are raised. (X)
Conclusion: Therefore profits are increased. (Therefore A)
But one can see that the 'proof' is invalid. A does not follow with necessity from the premises. (A could be a consequence of the premises, but need not be so.)
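The invalidity of this form (affirming the consequent) can be shown by a single counterexample assignment, found mechanically; a minimal Python sketch:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q."""
    return (not p) or q

# Search for an assignment in which both premises hold
# ('If A then X' and 'X') but the conclusion 'A' fails.
# One such counterexample suffices to show the form invalid.
counterexamples = [(A, X) for A, X in product([True, False], repeat=2)
                   if implies(A, X) and X and not A]
print(counterexamples)  # → [(False, True)]: prices raised, profits not increased
```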

In order to establish that an hypothesis is true it would be necessary to show not merely that that particular supposition will explain the facts, but also that no other will. Symbolically, if our hypothesis is 'If A, then X', it could not be held to constitute logical proof until we also established 'If X, then A'. Scientific proof would further require the qualification of universality, as in 'If A, then always X, and never X without A'. This raises Hume's problem of predictability: have we any guarantee that what has held true in the past will necessarily do so in future? Those who answer 'no' tend to regard all scientific 'knowledge' as no more than well-informed supposition - even such repeatedly well-proven physical laws as those of mechanics and gravitation. Those who argue otherwise tend to do so from experience, such as the extreme past reliability and accuracy of the laws of mechanics and gravitation within our planetary system, holding that this demonstrates empirically that, at least in some fields of science, we have certain or universal knowledge, with complete predictability as a consequence. Few, however, would claim that laws of such universality and certainty, permitting predictions of the determinacy found in physics, are known in the genuinely social or psychological sciences.

If a predicted consequence (or 'test implication') of an hypothesis is not observed, a different logical situation appears to apply, as follows:-
If A, then X (e.g. If the patient has pneumonia he must have a temperature)
Not X (The patient has no temperature)
Therefore not A (Therefore the patient does not have pneumonia)
Provided that 'If A, then X' is interpreted to mean 'Whenever A occurs, then X occurs' (or else 'X always accompanies A'), we see that the above argument has a valid conclusion and thus constitutes conclusive proof. It would appear, logically, that an hypothesis can be falsified by the above procedure, i.e. that it can be rejected as certainly false. Unfortunately there are other cogent reasons why this is not empirically so.
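The validity of this form (modus tollens) can likewise be checked exhaustively; a minimal Python sketch:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q."""
    return (not p) or q

# Exhaustive check: there is no truth assignment in which the
# premises 'If A then X' and 'not X' both hold while the
# conclusion 'not A' fails. Hence the form is valid.
for A, X in product([True, False], repeat=2):
    if implies(A, X) and not X:
        assert not A
```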
In the above example, if the technique of measuring the patient's temperature has been at fault - say the thermometer was removed too soon, the thermometer itself was faulty, or the observer misread it - then the observation is incorrect and the hypothesis would not be falsified.

In principle one can never be certain that observations are carried out infallibly so as to falsify the hypothesis. This applies with greater force the more demanding and complex the observational or experimental conditions are, and the more delicate or advanced the measuring equipment or experimental apparatus. Further, when the observation is a test implication that has been deduced from the hypothesis (as in the hypothetical-deductive method), there is always the possibility of an error in the deductive processes involved. These can, of course, be most complex, whether purely algebraic, quantitative or semantic in nature.

Finally, even if many exact and reliable observations support an hypothesis and if no errors actually occurred in deduction, the possibility still remains, in principle, of negative instances being discovered that show the hypothesis not to be universally valid. Sometimes, a hypothesis can be modified so as to accommodate errors in its exact formulation. The discovery of negative instances etc. can lead to an improved reformulation of an hypothesis and constitute an advance in accuracy.

Extension of Hypothesis

If a general hypothesis proves reliable when put to the test of observation, it will usually be possible to infer test implications other than those first developed, so that facts which have passed unobserved, or which have not so far been explained, can be brought under its generality. The history of science abounds with examples of such predictions and extensions: Galilean mechanics, Newton's theory of gravitation and Einstein's theory of relativity are prime examples. Relativity allowed predictions that could not be tested until many years after the theory was put forward, when sufficiently precise measuring equipment had been developed. An example of the extension of a hypothesis is given in the following section, where Galileo extends his law of falling bodies to include bodies falling along inclined planes.

Hypothetical-deductive method exemplified in natural science

The first to use the method described here was Galileo (1564-1642). He noted that "some superficial observations have been made, for instance, that the free motion of a heavy body falling is continuously accelerated, but just to what extent this acceleration occurs has not yet been announced…" (Dialogues Concerning Two New Sciences, trans. Crew & de Salvio, 1950, pp. 153-4).

This amounted to preliminary observation, yet insufficient for any sort of conclusive proof. Galileo carried out further observations by dropping cannon balls from the various floors of the Tower of Pisa (which leaned even then). These observations were not accurate enough to demonstrate the above 'hypothesis'. However, they strongly indicated that the Aristotelian doctrine that 'bodies fall with a velocity proportional to their weight' was incorrect, since cannon balls of varying weight hit the ground simultaneously.

Galileo sought to explain the movements of bodies in terms of geometry and to show that there is correspondence between geometric axioms and the behaviour of bodies in motion. He proceeded to seek an exact expression of a regularity to which the motions of all vertically-falling bodies would conform. The hypothesis he discovered, in this case largely through mathematical speculation it would seem, went beyond the observations. It was embodied in the following definition:- "A motion is said to be equally or uniformly accelerated when, starting from rest, its momentum receives equal increments in equal times".

When challenged as to the likelihood of this 'definition' being correct (or, in current terminology, of the hypothesis being supported by observation), Galileo decided to seek a means of testing it against the facts of observation. This 'definition' did not go beyond the attempt at description of the behaviour of moving bodies in general. In other words, it did not postulate the cause of movement, nor an answer to the question 'why do bodies fall?'.

Galileo developed the proposition: "The spaces described by a body falling from rest with a uniformly-accelerated motion are to each other as the squares of the time-intervals employed in traversing these distances". Expressed as a modern algebraic equation, this states s = ½gt², where s = distance covered, t = time taken and g is a constant. By geometrical deduction, Galileo calculated how long it would take a ball to traverse any distance along an inclined plane of any angle. This use of inclined planes involved the additional assumption that bodies rolling down inclined planes conform to the same regularity of motion as those falling freely and vertically. By using a sufficiently mild incline it was possible - even with the relatively primitive and inaccurate measuring equipment then available (such as pulsometers and water-clocks) - to set up an experiment in which the test-implications could be applied and the hypothesis tested by observation. Note that the additional assumption made in order to find a way of testing the main hypothesis itself amounts to an hypothesis. It can be called a supporting or auxiliary hypothesis, and can eventually be subjected to empirical testing by another experimental set-up, as Galileo in fact did, as will be shown.
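The regularity s = ½gt² can be illustrated numerically. The sketch below uses a round illustrative value for g (Galileo himself worked with ratios, not a numerical constant, and the ratios are independent of g):

```python
g = 10.0  # round illustrative value for the constant; the ratios below do not depend on it

def distance(t):
    """Distance fallen from rest in time t, by s = (1/2) g t^2."""
    return 0.5 * g * t ** 2

# The spaces are to each other as the squares of the times.
t1, t2 = 2.0, 3.0
assert abs(distance(t1) / distance(t2) - (t1 ** 2) / (t2 ** 2)) < 1e-12

# Equivalently: successive distances covered in equal time-intervals
# follow the odd numbers 1 : 3 : 5 : 7 ...
intervals = [distance(t + 1) - distance(t) for t in range(4)]
ratios = [d / intervals[0] for d in intervals]
print(ratios)  # → [1.0, 3.0, 5.0, 7.0]
```

The odd-number progression is the form in which Galileo himself often stated the law.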

The exact distance predicted to be covered in a given time on an inclined plane of any given angle is the test-implication of this example. Galileo worked out various test implications for various angles and lengths of incline so as to re-test the hypothesis under varying conditions. In every case, repeated observations of each experiment showed that bodies did roll the predicted distance in the predicted time. Thus the hypothesis "the distance fallen by bodies on an inclined plane is proportional to the square of the time taken" was supported by the final observations, and Galileo took the hypothesis to be validated (i.e. proven true). In order for the main hypothesis to be similarly supported, however, it was necessary to show that the auxiliary hypothesis was justified too.

The following test implication was then tested: "that the speeds acquired by one and the same body moving down planes of different inclination are equal when the heights of the planes are equal". Since this matched the final test observations, the main hypothesis was considered validated, namely: "the distance fallen by bodies is proportional to the square of the time taken". (Note that this applies to all falling bodies, whether falling vertically or otherwise.)
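The equal-heights claim can be illustrated with elementary mechanics, which post-dates Galileo's own geometrical reasoning; the following Python sketch is a modern illustration (frictionless plane, round illustrative value for g), not his derivation:

```python
import math

g = 10.0  # round illustrative value for the constant acceleration

def final_speed(height, angle_deg):
    """Speed after descending a frictionless plane of the given vertical
    height and inclination: the acceleration along the plane is
    a = g*sin(theta) over a length L = height/sin(theta), so
    v = sqrt(2*a*L) = sqrt(2*g*height), independent of the angle."""
    theta = math.radians(angle_deg)
    a = g * math.sin(theta)       # acceleration along the plane
    L = height / math.sin(theta)  # length of the plane
    return math.sqrt(2 * a * L)

# Same height, very different inclinations: the final speeds agree.
speeds = [final_speed(2.0, angle) for angle in (15, 30, 60, 90)]
assert all(abs(v - speeds[0]) < 1e-9 for v in speeds)
```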

Galileo even conjectured about the possible effects of friction on balls rolling along inclined planes and tried to measure them. However, his instruments were not accurate enough for positive results. The experiments described were therefore carried out with polished planes and with balls as round and smooth as possible, so as to reduce friction. It is chiefly with the above historical experiments as a model that the hypothetical-deductive method has been 'abstracted' from the study of scientific procedures. Galileo himself did not set out his method in the formal, step-by-step way that philosophers of science have done since.