Alan L Tyree

Legal Reasoning: the Problem of Precedent

Alan L Tyree, Graham Greenleaf and Andrew Mowbray

The DataLex Research Project

——————————————————

Abstract. Legal reasoning with precedent, although similar in some respects to reasoning by analogy and learning from examples, presents some special problems. There are specialised techniques for avoiding "unwanted" results. The number of examples available is usually too small for the normal induction algorithms and the problem does not appear to yield readily to rule based inferencing. We describe a method of knowledge representation, various means to test the adequacy of the representation and a method of inferencing which allows a form of induction from a sample space which is of cardinality smaller than the number of attributes used.

——————————————————

1. Introduction

Serious legal expert systems must be able to reason with case law, particularly in the area of legal precedent. It is characteristic of such reasoning that it is based on a very small set of decided cases. The DataLex Project has experimented with methods of building legal advice machines based on knowledge bases constructed only from the decided cases.

The problem has several unusual characteristics that make it difficult. First, the reasoning, although similar in some respects to reasoning by analogy, is different from that of any other domain. Secondly, the number of decided cases is usually so small that ordinary induction algorithms are useless. Finally, because of the form of the knowledge base, it is believed to be important to subject it to various tests to determine its "knowledge content". This paper describes the theories used in the context of a particular application.

2. The Nature of Legal Reasoning

The legal system of Australia, like that of other former British colonies, relies heavily on the legal doctrine of precedent. According to this doctrine, each and every case decided by the court becomes a part of the law itself, not merely an explanation which later judges may adopt or reject.

The danger in such an arrangement is that the law may become completely inflexible. In order to prevent this, there have been a number of devices developed to avoid "unwanted" outcomes. These devices include the obvious ones such as having regard to the seniority of the court which decides the case, but also less obvious ones which become a part of the legal reasoning process.

This form of legal reasoning is in some respects similar to the general problem of reasoning by analogy, but there are some specific and difficult differences. First, there is a deontic aspect, for even those cases which are not strictly binding on a court have some legal force. A second difference is that the process contains formalised methods for "distinguishing" past cases. A case is distinguished by noting legally significant differences from previous cases which may be used to justify rejecting the outcome of the previous case, at least insofar as its applicability to the problem at hand is concerned.

At one time it was thought that each decided case stood for a single rule of law which could be formulated in much the same way as a section of a statute, but this theory has been shown to be completely unworkable (Goodhart, 1959; Stone, 1959). It is now clear that any interpretation of the legal significance of a case must be in the larger context of the legal material in which it is embedded (Stone, 1985).

Perhaps for this reason, there has been little success in dealing with the general problem of reasoning with case law by means of the usual production rule formulation. It is not that it is theoretically impossible to write such rules, but that it is not the natural way in which lawyers reason with cases. If that is correct, then attempting to use the usual rule based systems will probably result in mediocre performance of the system and will certainly be wasteful of time in the knowledge engineering process.

Another way of viewing the problem of reasoning with case law is as a problem of extracting knowledge from a set of examples, the decided cases. This way of looking at the problem would seem to make it a candidate for inductive tree generators such as Quinlan’s ID3 algorithm (Quinlan et al, 1986). Unfortunately, the number of decided cases in any particular problem domain is usually far too small to make use of such algorithms. The methods proposed here are of general application and may be useful when there are too few examples for inductive tree generation.

3. Fundamental Role of Similarity

The approach adopted by the DataLex project begins with some simple observations concerning the reason for the existence of the doctrine of precedent. One of the functions served by the doctrine is that of certainty, for if a statute or other expression of law has gaps which allow for uncertainty, then it should only be necessary to consider the matter once. Following the decision on the matter, advice may be given with some confidence as to the current state of the law.

But the doctrine of precedent serves a more fundamental role, namely that of implementing a policy of simple fairness and justice in the legal system. One of the leading textbooks says this:

"It is a basic principle of the administration of justice that like cases should be decided alike."

There may be a number of different ways of implementing this basic principle, but in English law and those legal systems derived from it, the job falls to the doctrine of precedent.

The DataLex approach to the problem of reasoning with case law falls back onto this reason for the existence of the doctrine of precedent. If it is possible to formalise a measure which will assist in the identification of "like cases" then the measure could itself be used to assist in the reasoning process.

4. Knowledge Representation

It is still, of course, necessary to represent the case law in some form which may be manipulated. Here the DataLex project has placed an emphasis on the factual variables which appear in each case. A number of attributes are identified by the domain expert and the database of relevant case law is then coded according to the values of these attributes, again assigned by the domain expert. The choice of attributes and the assignment of values constitute the major part of the knowledge input by the domain expert.

4.1 Sample Problem: the Finders Cases

As an example of the way in which the process works, the original research considered a domain in which the law is entirely defined by the decided cases.

The legal problem considered is that of a conflict of interests where one of the parties has found a chattel which is then claimed by the other disputant. In the prototypical dispute, the chattel has been abandoned or lost and the owner of it can no longer be found. The two claimants are, again prototypically, the person who has found the chattel and the owner or the occupier of the land on which it was found.

This legal domain is almost perfect for a project concerned with case law reasoning. This is so not only because the domain is entirely defined by the case law and because it is small, but also because it is far from simple. This is the area of law where one judge has said:

"These cases...have long been the delight of professors and text writers, whose task it often is to attempt to reconcile the irreconcilable."

It is easy to understand why the case law has become difficult and "irreconcilable". In the usual case both parties are seeking a windfall. Our sense of justice deserts us in such cases, for there is no clear general rule of fairness which would dictate awarding the windfall to one rather than the other of the claimants.

4.2 Vector Representation

In the finders cases, the domain expert identified ten attributes which were of legal importance in the outcome of the cases. A design decision was made at an early stage that the attribute values should be restricted to "yes/no" values since a numeric coding was anticipated.

In the initial studies, only eight cases were included in the knowledge base since these eight form the basis for much of the discussion of the law of finders in the leading textbooks (Tyree, 1977). The main contents of the knowledge base may thus be summarised in an eight by ten matrix of attribute values.

The cases used are the following, identified by a single letter for use in the later diagrams:

A: Armory v Delamirie (1721) 1 Strange 505

B: Bridges v Hawkesworth (1851) 21 LJQB 75

C: Elwes v Brigg Gas Co (1886) 33 Ch D 562

D: Hannah v Peel [1945] 1 KB 509

E: London v Yorkwin [1963] 1 WLR 982

F: Moffatt v Kazana [1969] 2 QB 152

G: South Staffordshire Water Co v Sharman [1896] 2 QB 44

H: Yorkwin v Appleyard [1963] 1 WLR 982

4.3 Measuring similarity

Once the cases in the database have been represented as numerical vectors, there are a number of standard methods available for measuring the distances between them. When, as in the finders cases, the values are always 0 and 1, the usual Euclidean metric essentially counts the number of facts on which two cases disagree. This is not a bad start, but it is clear to most experts that the factors chosen in the finders cases are not of equal importance. It would seem desirable to increase the expert’s input to the inferencing system by assigning weights to the various factors.

Weighted Distances

If this is done, then the Euclidean metric can be modified by weighting the coordinates of the vectors. The formula becomes:

D² = Σᵢ wᵢ(xᵢ − yᵢ)²

The special case in which every weight is assigned the value 1 reproduces the ordinary Euclidean metric.

Rather than asking the domain expert to assign weights directly, we have chosen to extract weights which are implicit in the expert’s assignment of attribute values. If a single attribute is considered along with its values across the database, it will be found that there is a considerable difference in the patterns formed. Some attributes are split more or less evenly between 0s and 1s across the database, whereas for others a single value predominates.

Generally speaking, attributes which have a single predominant value seem a poor way to differentiate among the cases in the knowledge base. One way of viewing this is that the expert must have included such a factor for a very special reason, and that it therefore deserves a higher weighting in the definition of the metric. Of course, the value of this approach can only be tested by empirical results, but our tests indicate that the performance of the system is improved by giving such a weighting to these factors.

A formal method of giving higher weights to these factors is to weight the factor by the inverse of the variance (the method is adopted from Kendall and Stuart, 1958). In the special case of numerical vectors having only 0 and 1 as values, this reduces to assigning a weight of 1/(Ave − Ave²) to the factor, where Ave is simply the numerical average of the factor taken across all cases in the database.
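As a minimal illustration of this weighting scheme, the following Python sketch computes the inverse-variance weights and the weighted squared distance. It is illustrative only: the attribute matrix below is invented, not the actual FINDER coding, and the FINDER program itself (as the appendix notes) is a C implementation.

    # Sketch only: invented 0/1 attribute vectors, not the actual FINDER coding.
    cases = {
        "X": [1, 0, 0, 1],
        "Y": [1, 1, 0, 1],
        "Z": [0, 1, 0, 0],
    }
    n_attrs = len(next(iter(cases.values())))

    def weight(column):
        # Inverse-variance weight 1/(Ave - Ave^2) for one 0/1 attribute.
        ave = sum(column) / len(column)
        var = ave - ave * ave
        return 1.0 / var if var > 0 else 0.0  # guard: ignore a constant attribute

    weights = [weight([vec[i] for vec in cases.values()]) for i in range(n_attrs)]

    def sq_dist(x, y):
        # Weighted squared Euclidean distance: D^2 = sum_i w_i * (x_i - y_i)^2.
        return sum(w * (a - b) ** 2 for w, a, b in zip(weights, x, y))

    print(sq_dist(cases["X"], cases["Y"]))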

The distances so defined are shown in Figure 1.

      A     B     C     D     E     F     G     H
A   0.0  18.7  37.7  24.1  32.3  42.4  33.7  29.1
B         0.0  18.9   5.3  24.3  32.2  14.7  18.9
C               0.0  13.6   5.3  31.9  14.9  19.2
D                     0.0  18.9  26.8   9.3  13.6
E                           0.0  37.2  20.3  24.5
F                                 0.0  36.2  31.9
G                                       0.0   4.3
H                                             0.0

Figure 1: Squared distances

4.4 Testing the Knowledge Base

One of the salient characteristics of legal data is that there is usually a shortage of it. This means that normal methods of testing are likely to be of limited use. It simply is not generally feasible to split the data into two groups and to save one of them for testing.

In order to determine the degree to which the encoded vector based knowledge representation captures the legal knowledge, several tests may be conducted.

Minimal Spanning Trees

The concept of a minimal spanning tree (MST) is central to much of our work. If the set of encoded cases is considered as a complete graph, that is, one in which each of the nodes (cases) is connected with every other node, then a spanning tree is a connected subgraph which contains all of the nodes and has the property that it is free of cycles.

Further, if the distance between the nodes is considered, then a MST is one which has the shortest total edge length among all spanning trees (Sedgewick, 1983). It is easily seen that a MST need not be unique although of course the total edge length of each MST is the same.

There is a straightforward algorithm for building minimal spanning trees which is based on the fact that if the nodes of a graph are split into two sets, then any MST must contain the shortest edge which connects the two sets (Sedgewick, 1983). The proof is simple: assume that the shortest edge connecting the two sets is not included in some MST. Add that edge to the tree. There must now be a cycle in the subgraph, and the cycle must include the edge which has just been added. But this means that the cycle also includes another edge between the two sets of vertices, one which belongs to the original tree. Delete this edge. The resulting subgraph is a spanning tree with a shorter total edge length than the original, contradicting minimality.
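The following Python sketch of this construction (essentially Prim’s algorithm) uses the squared distances of Figure 1 and recovers the tree shown in Figure 2. The sketch is illustrative only and is not the FINDER implementation.

    # Squared distances from Figure 1 (upper triangle; the matrix is symmetric).
    D = {("A","B"): 18.7, ("A","C"): 37.7, ("A","D"): 24.1, ("A","E"): 32.3,
         ("A","F"): 42.4, ("A","G"): 33.7, ("A","H"): 29.1,
         ("B","C"): 18.9, ("B","D"): 5.3,  ("B","E"): 24.3, ("B","F"): 32.2,
         ("B","G"): 14.7, ("B","H"): 18.9,
         ("C","D"): 13.6, ("C","E"): 5.3,  ("C","F"): 31.9, ("C","G"): 14.9,
         ("C","H"): 19.2,
         ("D","E"): 18.9, ("D","F"): 26.8, ("D","G"): 9.3,  ("D","H"): 13.6,
         ("E","F"): 37.2, ("E","G"): 20.3, ("E","H"): 24.5,
         ("F","G"): 36.2, ("F","H"): 31.9,
         ("G","H"): 4.3}

    def dist(a, b):
        return D[(a, b)] if (a, b) in D else D[(b, a)]

    def mst(nodes, dist):
        # Grow the tree by the shortest edge crossing the cut (Prim's algorithm).
        connected, edges = {nodes[0]}, []
        while len(connected) < len(nodes):
            a, b = min(((a, b) for a in connected for b in nodes if b not in connected),
                       key=lambda e: dist(*e))
            edges.append((a, b, dist(a, b)))
            connected.add(b)
        return edges

    print(mst(list("ABCDEFGH"), dist))  # recovers the tree of Figure 2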

          18.7
    A-----------B
                |
                | 5.3
         26.8   |   13.6      5.3
    F-----------D---------C---------E
                |
                | 9.3
                |
    H-----------G
          4.3

Figure 2. Minimal Spanning Tree

MSTs are interesting because they are fundamental in certain types of cluster analysis (Zahn, 1971). To illustrate how this works, consider the complete graph of the finders cases. A MST of the eight vertices is shown in Figure 2. If the longest edge of the tree is removed, then the diagram splits into two sets of vertices. Splitting the resulting subgraphs in a similar fashion results in finer and finer categorisation into groups. For a small number of points, the full process may be shown in a dendrogram such as Figure 3.

[Figure 3. Cluster Dendrogram (not to scale). Reading the single linkage clusters from the bottom up, with the cases ordered G H B D E C A F: G and H join at 4.3; B and D join at 5.3; E and C join at 5.3; {G, H} joins {B, D} at 9.3; that group joins {E, C} at 13.6; A joins at 19.9; and F joins last at 26.9.]

If the imposed metric really captures the legal knowledge of the expert, then these groups should have some significant legal meaning. We note that the first two splits isolate only single cases, namely, Moffatt v Kazana in the first instance and Armory v Delamirie in the second. This, surprisingly perhaps, is a good sign, for the purely legal analysis of the cases has suggested that these two cases do not really belong with the rest.

There are no other significant clusters formed until the remaining batch of six cases splits into three groups at about the same level. The three groups make legal sense. The first pair are essentially the prototypical finders case; in the second pair it was relevant that the finder was an employee of the other claimant; and in the third pair the dispute was decided, at least in part, on the basis of the terms of a lease between the two parties.
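The splitting step just described may be sketched as follows, reusing the mst and dist sketches given above; applied to the Figure 1 data, the first split isolates Moffatt v Kazana (case F).

    def split(nodes, edges):
        # Delete the longest MST edge, then flood-fill to find the two halves.
        longest = max(edges, key=lambda e: e[2])
        rest = [e for e in edges if e is not longest]
        side = {longest[0]}
        changed = True
        while changed:
            changed = False
            for a, b, _ in rest:
                if (a in side) != (b in side):
                    side.update((a, b))
                    changed = True
        return side, set(nodes) - side

    nodes = list("ABCDEFGH")
    print(split(nodes, mst(nodes, dist)))  # second set is {'F'}, i.e. Moffatt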

Nearest Neighbour Diagrams

When one of the sets of vertices consists of a single point, the fundamental property of MSTs may be rephrased: the edge which connects the vertex to the rest of the collection must be that which connects to the vertex’s "nearest neighbour".

This provides an interesting alternative sequence for constructing the MSTs. Begin with an arbitrary node and connect it with its nearest neighbour. Connect that new node with its nearest neighbour if not already added to the collection. When a dead end is reached, choose arbitrarily a node which is not yet in the diagram and repeat the process.

The results of this process do not suffice to form the complete spanning tree but rather form connected sets of vertices. If desired, these connected sets may be themselves connected using the above fundamental property of MSTs, but it is more interesting to consider the connected sets in isolation. If the knowledge of the expert has been properly captured, then it is hoped that each case is decided in the same way as its nearest neighbour. If the knowledge base is "perfect" then all of the cases in each connected subset should be decided in the same way.
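This construction may be sketched as follows. Rather than following the sequential process literally, the sketch links every case directly to its nearest neighbour, which yields the same connected sets; the case labels and dist are as in the earlier MST sketch.

    def nn_links(nodes, dist):
        # Link each case to its nearest neighbour; mutual pairs appear twice.
        links = []
        for a in nodes:
            b = min((n for n in nodes if n != a), key=lambda n: dist(a, n))
            links.append((a, b))
        return links

    for a, b in nn_links(list("ABCDEFGH"), dist):
        print(a, "-->", b)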

The nearest neighbour diagrams are shown in Figure 4. A small "x" marks those links where the outcome of the connected cases is not what is expected. We note that there is only one "error" in the diagrams.

                              x
    A-------->B<-------->D<--------F

    C<-------->E

    G<-------->H

Figure 4. Nearest Neighbour Diagram

ID3 Algorithms

A third way of testing the representation of the knowledge encoded as vectors is to use an inductive generator similar to Quinlan’s ID3. As mentioned above, these algorithms are not usually suitable for providing an inference mechanism because of the general paucity of data.

For example, in the finders cases the original research was conducted with only eight cases. It might have been possible to find as many as 20 cases, but many of the cases are simply repetitious in terms of the factors which we have identified. Further, the cases used here are those which are discussed in the influential texts as defining the law relating to finders. In other words, these are the ones which are generally used by human experts to find the law. It would be nice if our systems could do as well with this limited information. If so, it may or may not be desirable to add other cases later.

But ID3 type algorithms can be used to help analyse the efficacy of our representation. With eight cases and only binary branching, there is the theoretical possibility of generating decision trees with eight separate terminal nodes, or a total of 15 nodes. If the representation really contains knowledge relevant to the decisions of the cases, one would expect the ID3 algorithm to generate a much more compact tree.

In fact, the algorithm applied to the coded case data produces a tree with precisely five nodes, only two more than the theoretical minimum. Again, we note that the decision tree has no direct use for us in an expert system, since it is easy for any lawyer to see that it does not contain knowledge sufficient to deal with all finders cases.
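A minimal sketch of an ID3-style generator used purely for this diagnostic appears below. The four-case data set is invented, since the point is only to count the nodes of the induced tree.

    import math

    def entropy(labels):
        n = len(labels)
        return -sum((labels.count(l) / n) * math.log2(labels.count(l) / n)
                    for l in set(labels))

    def id3(rows, labels, attrs):
        # Returns (tree, node_count). Leaves are outcome labels.
        if len(set(labels)) == 1 or not attrs:
            return labels[0], 1  # leaf (ties broken arbitrarily in this sketch)
        def gain(i):
            g = entropy(labels)
            for v in (0, 1):
                sub = [l for r, l in zip(rows, labels) if r[i] == v]
                if sub:
                    g -= len(sub) / len(labels) * entropy(sub)
            return g
        best = max(attrs, key=gain)  # attribute with greatest information gain
        branches, count = {}, 1
        for v in (0, 1):
            sub = [(r, l) for r, l in zip(rows, labels) if r[best] == v]
            if sub:
                srows, slabels = zip(*sub)
                branches[v], c = id3(list(srows), list(slabels),
                                     [a for a in attrs if a != best])
                count += c
        return (best, branches), count

    rows = [[1, 0, 1], [1, 1, 0], [0, 1, 1], [0, 0, 0]]   # invented cases
    labels = ["finder", "finder", "occupier", "occupier"]
    tree, n_nodes = id3(rows, labels, [0, 1, 2])
    print(n_nodes)   # a compact tree suggests decision-relevant attributes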

A different way of analysing the knowledge content of the vector representation of the finders cases is to consider the clustering of attributes. In this approach, we consider the similarity between attributes, form the appropriate MST and consider repeated single linkage clusters.

We weighted the distances between cases because we thought that such a process would reflect the implicit importance attached to the choice of attributes by the domain expert. Although the cases chosen might have gone through a selection process, it is less clear that the selection has been on any meaningful basis, so that a straightforward Euclidean metric will probably be sufficient.

The theory here is that the attributes should not form any stable clusters of any size, for that would indicate that it might be more sensible to amalgamate the stable cluster into a single newly defined attribute. The clustering tree for the attributes does not exhibit any particularly stable cluster and we conclude that, although far from perfect, the attributes chosen do not involve any undue redundancies.

Ranking tests against human experts

A final test is to compare the knowledge base against that of a human expert. One of the ways in which this can be done is to ask the human expert to rank pairs of cases according to his or her concept of "similarity". The ranking assigned by the expert may be compared with that generated by the distance measure. If the knowledge base has succeeded in capturing the expertise of the domain expert, then we should expect a good agreement in the rankings.

Such an experiment was performed on the FINDER knowledge base using law students as domain experts. The students had no contact with the formulation of the vector representation. The results were measured using Spearman Rank correlation coefficients (Kendall, 1955). Of the four subjects who participated, the lowest correlation with the metric ranking was .63, the highest .88. Among the human subjects, the lowest correlation was .65, the highest .90.

One of the outcomes of this experiment was that the subjects themselves showed considerable agreement, thus providing the first known empirical evidence that there really is a common notion of similarity held by lawyers. When measured using Kendall’s coefficient of concordance, the agreement measured .83 (Kendall, 1955; see also Tyree, 1977). This figure, and the Spearman coefficients mentioned in the last paragraph are all statistically significant at the .01 level.
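The rank comparison may be sketched as follows. The two rankings below are invented; in the actual experiment each subject ranked pairs of cases, and the metric ranking came from the distances of Figure 1.

    def spearman(rx, ry):
        # Spearman rank correlation for two rankings of the same n items (no ties).
        n = len(rx)
        d2 = sum((x - y) ** 2 for x, y in zip(rx, ry))
        return 1 - 6 * d2 / (n * (n * n - 1))

    expert_rank = [1, 2, 3, 4, 5, 6]   # invented ranking of six case pairs
    metric_rank = [2, 1, 3, 4, 6, 5]
    print(round(spearman(expert_rank, metric_rank), 2))   # 0.89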

Of course, this agreement with the human domain experts merely shows that the knowledge in the database captures some of the information used by the experts. It does not indicate any particular way of reasoning with the knowledge or show that any particular form will reproduce the way in which the expert reasons with the knowledge.

5. Constructing a Legal Advisor

Perhaps the most convincing way to test the adequacy of the knowledge base is to use it together with an inferencing procedure to solve real problems and to test the solutions provided by the system against those of human experts.

The knowledge base constructed in accordance with the preceding ideas may be used to construct an advisor on those areas of law where recourse must be had to previously decided cases. In the current DataLex implementation, named FINDER, the advisor solicits information from a "client" who has a finder-type problem. The system is encoded as an application of the PANNDA module of the DataLex legal expert system shell LES. PANNDA is an acronym for "Precedent Analysis by Nearest Neighbour Discriminant Analysis".

The construction of the advisor is actually in two distinct parts. In the first part, the "closest" case in the knowledge base is selected and the outcome of the "client" case is predicted to be the same. The knowledge base is then split into two parts, those cases with the same outcome as that just predicted and those with the opposite outcome. The case nearest to the "client" case, subject to the constraint that it has a different outcome, is then selected. This case, referred to as the "nearest other" in the program, is the case which provides the strongest argument against the predicted outcome.
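This selection step may be sketched as follows; dist is the weighted metric of section 4, and the outcome labels are supplied by the knowledge base (they are not reproduced in this paper).

    def advise(client, cases, dist, outcome):
        # Predict the client outcome from the nearest case, then find the
        # nearest case with the opposite outcome (the "nearest other").
        nearest = min(cases, key=lambda c: dist(client, c))
        predicted = outcome[nearest]
        others = [c for c in cases if outcome[c] != predicted]
        nearest_other = min(others, key=lambda c: dist(client, c)) if others else None
        return predicted, nearest, nearest_other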

In the second part of the advisor, a report is generated which is similar to a barrister’s opinion in that it provides arguments in favour of the predicted outcome and distinguishes the "nearest other" case. A sample report from the current program is attached to this paper.

5.1 Nearest Neighbour Prediction

Nearest neighbour prediction is obviously not a procedure which is directly modelled on the way that a human lawyer would solve the problem, so some may not classify it as an "artificial intelligence" technique. Our view is that we do not know how a lawyer really deals with the precedent problem, so that any method which produces results which are in accordance with the results reached by a human expert must be a contender until a better procedure is found or until we learn more about the way the human expert really functions.

Of the various statistical techniques which might be used to predict the outcome, the nearest neighbour procedure has some characteristics which make it attractive in the case law context. It is known to be a stable and relatively powerful technique for a wide variety of underlying statistical distributions (Cover and Hart, 1967). Since we have no sensible idea of the underlying distribution of the knowledge base, this theoretical stability of the procedure is reassuring.

5.2 Safeguards against blunders

However, since the method does not directly model any known human intellectual process, it seems wise to build in some cross checks to avoid giving advice in situations which are "marginal" in some sense. There are three possibilities for guarding against blunders, although we have only implemented the first in the existing programs.

Test against centroids

The first check, and the only one which has been implemented, merely uses an alternative method of classification. If the cases are split into two categories according to outcome, then we may compute the centroids of the two groups. A different, and less efficient, method of assigning the client case to a group is to assign it to the group whose centroid is nearest.

If this procedure results in a different outcome from the nearest neighbour classification, then it must be because the client case is, in some sense, near the common boundary of the two groups and so must be considered as a "difficult" case.
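A sketch of the centroid check follows; vectors maps cases to their 0/1 attribute vectors and sq_dist is the weighted squared distance sketched in section 4 (both hypothetical here).

    def centroid(vecs):
        # Coordinate-wise mean of a list of attribute vectors.
        return [sum(col) / len(col) for col in zip(*vecs)]

    def centroid_class(client_vec, vectors, outcome, sq_dist):
        # Assign the client case to the outcome group with the nearest centroid.
        groups = {}
        for case, vec in vectors.items():
            groups.setdefault(outcome[case], []).append(vec)
        cents = {label: centroid(vs) for label, vs in groups.items()}
        return min(cents, key=lambda label: sq_dist(client_vec, cents[label]))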

At the very least the user of the system should be warned that the consultation has caused the program difficulties. Since our philosophy has called for designing systems which will be used by lawyers, it seems appropriate that the user may seek the system’s advice after being warned that the advice is dubious. If the system were designed for use by non-lawyers, we would probably not offer the option.

Test against ideal points

A second method of testing for difficult cases raises some interesting possibilities for the systems to be used in areas other than the legal problem of precedent. The method depends upon the simple observation that there is no technical reason that the "cases" need to be real cases.

The idea here is to introduce "ideal points" in the knowledge base. These would be two imaginary cases which are the strongest possible for the finder of the chattel and for the other claimant respectively. Again, a test against these ideal points which produced results different from the standard nearest neighbour inferencing mechanism would produce a warning to the user that the client case is unusual in some way.
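A sketch of the ideal point check appears below. The all-ones and all-zeros vectors are an assumption made for illustration only; in practice the domain expert would code the strongest possible case for each side attribute by attribute.

    def ideal_class(client_vec, sq_dist):
        # Assumed extreme codings; the real ideal points would be set by the expert.
        ideal_finder = [1] * len(client_vec)
        ideal_other = [0] * len(client_vec)
        if sq_dist(client_vec, ideal_finder) < sq_dist(client_vec, ideal_other):
            return "finder"
        return "non-finder"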

This idea of using ideal points is not new, of course, since it has often been mentioned that the examples used in the ID3 algorithm need not be real examples but could be ones invented by the domain expert.

Test against max in MST

The third test, like the centroid test, is against the structure of the knowledge base itself. Recall that once the nearest neighbour is identified, the link between the client case and the nearest neighbour must be a part of any MST of the complete graph which includes the client case.

Instead of considering all of the cases, consider only those cases in the group to which the client case has been assigned. Form a MST and consider the longest link in the tree. If that link is one which connects to the client case, there is cause for concern, for it means that the client case would split away from the rest of the cases at the first stage of a cluster analysis.
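A sketch of this check, reusing the mst sketch of section 4.4:

    def client_splits_first(client, group, dist):
        # Build the MST of the client's assigned group (client included) and
        # test whether the longest edge is incident to the client case.
        edges = mst([client] + list(group), dist)
        a, b, _ = max(edges, key=lambda e: e[2])
        return client in (a, b)  # True: the client would split off first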

5.3 The Need for Justification

At one time it was very fashionable to use statistical methods to predict the outcome of court cases, particularly the cases which were before the United States Supreme Court (Kort, 1956; Nagel, 1965; Haar, 1977; Tyree, 1981). The activity attracted a great deal of hostility from traditional lawyers who argued that it was a useless activity at best and subversive of the legal system at worst (Weiner, 1962).

Whatever one’s views on the matter, it is clear that a mere statistical prediction is almost useless for the practising lawyer. This is for two reasons. The first is common to all advice giving programs, namely, that the advice is suspect without some justifying explanation. In this situation, the justification provided is used merely to make the advice seem more credible.

The second reason may be unique to the legal domain: the justification is precisely the advice which is being sought! The arguments for and against are more important to the practitioner than the predicted outcome for without these arguments the case may not even be pursued. We believe that any legal reasoning system must provide detailed arguments to the user of the system if it is to have any use whatsoever.

5.4 Constructing a supporting argument

Constructing the supporting arguments in the current PANNDA module of the LES system follows a fairly rigid format. The outcome of the case is predicted, a brief summary of the supporting case is given, and the most important, that is, most heavily weighted, reasons for the similarity found are reported to the user.

If there are some differences between the client case and the nearest neighbour, then the "nearest other" case is summarised and then distinguished by identifying the most important differences between it and the client case.
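The skeleton of the report might be sketched as follows; the case summaries and attribute descriptions are drawn from the knowledge base, and all of the strings here are placeholders.

    def report(nearest, summary, shared, other, differences):
        # Rigid format: prediction, supporting summary, similarities,
        # then the distinguished "nearest other".
        return "\n\n".join([
            "It would seem that the likely outcome would be the same as "
            "that in %s." % nearest,
            summary,
            "There are many similarities: %s." % ", ".join(shared),
            "The opposite result was reached in %s. But %s has some "
            "significant differences: %s." % (other, other, ", ".join(differences)),
        ])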

The procedure is simple, but produces remarkably sophisticated looking opinions. The attached report is an analysis of Parker v British Airways, which was decided by the English Court of Appeal. The Court of Appeal came to the same decision, relying on the older case of Bridges v Hawkesworth, identified by the PANNDA module as the nearest neighbour. The Court of Appeal gave careful consideration to the argument that Bridges should be overruled, a possibility not foreseen by our model.

6. Future Development

The main thrust of current work on the PANNDA approach to case law is to develop more sophisticated arguments and justifications. This can be achieved to some extent by selective perturbations of attribute values in the knowledge base, although significant improvements will probably require a more sophisticated knowledge representation scheme.

The use of MSTs in evaluating the knowledge base suggests that an analysis of the trees should assist in building more sophisticated and complex arguments, but the details of how this might be done have not been worked out.

The way in which the PANNDA module is used is also under development. We believe that the user should be able to have more influence on the purpose of the advice. As an example, the user of the FINDER system who was preparing the Parker case should be able to use the system to assist in constructing arguments concerning the correctness or otherwise of Bridges.

7. Conclusions

We believe that the statistical approach to legal reasoning about precedent has proved itself to be useful by the performance of the PANNDA module in the LES expert system shell. One of the main objections to the statistical reasoning has always been that it does not provide what the lawyer needs most, namely the arguments, but PANNDA has already shown that good justifications may be constructed.

There are philosophical objections to a PANNDA type approach since there is no attempt to copy the methods used by human experts to deal with the problem. It is sometimes said that FINDER appears to be more intelligent than it is.

Such criticisms are hard to counter. We agree that the PANNDA approach is not "human", but since we do not know what the human approach really is, the alternative is to refrain from building case law systems at all. We believe that the justifications of the final report should be adequate to allow the user to determine if the advice is good or bad, so that it simply does not matter how the report was constructed.

Perhaps the best justification for interest in the PANNDA approach is that by studying the differences between the advice given by the program and the advice generated by humans, it will be possible to shed some light on the way in which lawyers really do reason about case law. If that happens, then PANNDA and its derivatives will have performed a very useful service.

Acknowledgements

The authors gratefully acknowledge the assistance of the Australian Research Grants Scheme, the Law Foundation of New South Wales, and the Faculties of Law of the New South Wales Institute of Technology, the University of New South Wales and the University of Sydney.

References

Cover and Hart (1967). Nearest Neighbour Pattern Classification, IEEE Trans Inform Theory Vol IT-13, pp 21-27, January, 1967.

Cross (1980). Precedent in English Law, 2nd Australian ed, Sydney.

Goodhart (1931). Essays in Jurisprudence and the Common Law, Cambridge.

Goodhart (1959). The Ratio Decidendi of a Case, 22 Modern Law Review 117.

Haar, Sawyer and Cummings (1977). Computer Power and Legal Reasoning: A Case Study of Judicial Decision Prediction in Zoning Amendment Cases, [1977] ABF Res J 651.

Kendall (1955). Rank Correlation Methods, 2nd ed, London.

Kendall and Stuart (1958). The Advanced Theory of Statistics, London, Vol 3, p 338.

Kort (1956). Predicting Supreme Court Decisions Mathematically: a quantitative analysis of the ’Right to Counsel Cases’, 51 Am Pol Sci Rev 1.

Nagel (1965). Predicting Court Cases Quantitatively, 63 Mich L Rev 1411.

Quinlan et al (1986). Inductive Knowledge Acquisition: A Case Study, in Proceedings of the Second Australian Conference on Applications of Expert Systems, NSW Institute of Technology.

Sedgewick (1983). Algorithms, Addison-Wesley, Chapter 31.

Stone (1985). Precedent and Law, Butterworths, Sydney.

Stone (1959). The Ratio of the Ratio Decidendi, 22 Modern Law Review 597.

Tyree (1977). The Geometry of Case Law, (1977) 4 Vict U L Rev 403.

Tyree (1981). Fact Content Analysis of Case Law, Methods and Limitations, 22 Jurimetrics Journal 1.

Weiner (1962). Decision Prediction by Computers: Nonsense Cubed - and Worse, 48 ABAJ 1023.

Zahn (1971). Graph Theoretic Methods for Detecting and Describing Gestalt Clusters, IEEE Trans on Computers Vol C-20, No 1, pp 68-86, January, 1971.

Appendix I

Sample FINDER Session

The following is a consultation with FINDER using the facts of Parker v British Airways [1982] 1 All ER 834. Parker was a passenger awaiting a flight in the VIP lounge when he found a gold bracelet lying on the floor. He handed the bracelet to the British Airways "lost and found" with instructions to find the owner or to return the bracelet to him. British Airways failed to find the owner but sold the bracelet and claimed the right to retain the proceeds. British Airways was a lessee of the lounge.

START OF CONSULTATION

Finder Ver. 1.0

Copyright (C) Dr. A. Tyree 1985 University of Sydney

[C Language Version

Copyright (C) A.S. Mowbray U.N.S.W.]

Do you want instructions ? n

[Attribute information is then solicited from the end-user]

CONCLUSION

It would seem that the likely outcome would be the same as that in Bridges v. Hawkesworth, that is, the finder should win.

In Bridges v. Hawkesworth (1851) 21 LJQB 75 the plaintiff found a bundle of banknotes on the floor of the public area of a shop. He handed the notes to the shopkeeper in order that the true owner of the notes might be found. Although the owner never was found, the shopkeeper refused to return the notes to the finder.

The court found for the finder, holding that there is a "general right of [a] finder to any article which has been lost as against all the world except the true owner". It was further noted that the notes had never been in the custody of the defendant nor within the protection of his house as might be the case had they intentionally been deposited there.

There are many similarities: the finder is not the occupier, the chattel is not attached, there is a bailment of the chattel, and the chattel is not hidden.

Of course, the present case is not on all fours with Bridges v. Hawkesworth since in that case the non-finder was the owner of the real estate.

The opposite result was reached in Yorkwin v. Appleyard. In Corporation of London v. Appleyard [1963] 1 WLR 982 workmen employed by Wates Ltd were engaged in cutting a key-way into a cellar wall for the purposes of securing a foundation when they found an old wall-safe built into a recess of the old wall. Inside was a wooden box which contained a large number of Bank of England notes.

The notes were handed over to the City of London police who sought interpleader proceedings to determine who was entitled to the possession of the notes.

Wates Ltd were an independent contractor engaged by Yorkwin Ltd for a construction project. Yorkwin were lessees in possession of the property which was owned in fee simple by the Corporation of London.

The Court followed the decision in South Staffordshire Water Co v. Sharman [1896] 2 QB 44 in holding that the owner of land is, in the absence of a better title elsewhere, entitled to the possession of objects which are attached to or under the land. Consequently, since the notes were in a wooden box within a safe built into the wall of the old building, the safe formed part of the demised premises.

Yorkwin, being in lawful possession of the premises, were in de facto possession of the safe, even though ignorant of its existence. Although Yorkwin was entitled to possession as against the finders, they in turn were displaced by the Corporation of London which relied successfully on a term in the lease which granted them the right to certain objects found on the premises.

But Yorkwin v. Appleyard has some significant differences: the chattel was attached, there was not a bailment of the chattel, there was a master/servant relationship between the parties, and the chattel was hidden.

Consequently, it would appear that there is nothing in Yorkwin v. Appleyard to warrant any change in the conclusions made.