Fred Keller studies intellectual property
Alan L Tyree

Abstract

The Keller Plan is an alternative teaching method which allows students to proceed at their own pace. Educational research has shown that Keller Plan students learn more (as measured by final examinations) and have better retention of what is learned (as measured by examinations administered after a lapse of time).

We describe a Keller Plan course in Intellectual Property Law which uses standard textbooks and computer testing.

Why look at alternative teaching methods?

Dubin & Taveggia 1968

In 1968, Dubin & Taveggia published the results of a study which made a detailed examination of 50 years of research into teaching methods. Methods studied and compared included lectures, various forms of discussion methods, lectures plus tutorials and various forms of self-study. Their conclusions are disturbing:

"[I]n the foregoing paragraphs we have reported the results of a re-analysis of the data from 91 comparative studies of college teaching technologies conducted between 1924 and 1965. These data demonstrate clearly and unequivocally that there is no measurable difference among truly distinctive methods of college instruction when evaluated by student performance on final examinations." Dubin, R and Taveggia, T, "The Teaching-Learning Paradox", Center for the Advanced Study of Educational Administration, University of Oregon, 1968, at p35.

Is Law Teaching Different?

Many law teachers seem to believe that the subject matter of law is so different from other subjects that the results of the Dubin and Taveggia study do not apply to the teaching of law. There is little experimental evidence, but what there is gives little comfort to this parochial attitude: Teich, P "Research on American Law Teaching: Is there a case against the case system?" 35 J Legal Education 167 (1986)

Mastery Learning Models

At about the time when the Dubin and Taveggia study appeared, a number of research papers began appearing which identified teaching methods which did consistently result in an improvement in student performance on final examinations. The methods are known collectively as "mastery learning" models. The salient characteristics of the method are that the students are given very precise information on what they are expected to learn and they are tested regularly to ascertain if they have in fact met the stated objectives.

Bloom and his associates at the University of Chicago have identified a number of factors which may be manipulated in various teaching models and have measured the "effect size" of successful manipulation of the variables. The interested reader should consult Bloom, BS, "The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring" [1984] Educational Researcher 4-16.

What is the Keller Plan?

The Keller Plan is a mastery learning method which is based on the principles of reinforcement learning theory. Its key features are self-pacing, modularity, a requirement for unit perfection (mastery) and rewarding success in each unit.

Self-Pacing

If every student is to achieve "mastery" then it is unlikely that they will all achieve it within the same time. Students will differ in ability and/or in study habits. Outside demands will mean that a particular student may require more time for one topic than another.

The Keller Plan requires that the amount learned by each student is fixed and that time is treated as a variable. This contrasts with standard teaching methods where the teacher sets the pace. A certain amount of time is spent on each topic and although the student may return to the topic for revision or for further study, the teacher moves on to the next topic on a fixed schedule. In this model, the time spent on a topic is fixed, and the amount of material learned by the individual student is variable.

One of the consequences of the self-pacing requirement is that most Keller Plan courses use written materials as the primary means of teacher-student communication. This is not a necessary feature of the Plan as any method of communication would suffice which allowed the student to proceed as and when he or she is prepared. Recorded lectures or videotapes could be used. However, since lawyers must spend a lifetime learning and since most of that learning will be from written materials, the use of written materials in a law course has the beneficial side-effect of developing the students' reading abilities.

Modularity

It is easier to learn material if that material is split into relatively small parcels. Keller Plan courses are divided into "units" or "modules". Guidelines for the construction of Keller Plan courses suggest a minimum of one module per week of instruction. Modules should be of approximately equal sizes and should represent an amount of work which may be conveniently tested in a module test of 30 - 40 minutes duration.

The Module Perfection Requirement

Students must demonstrate mastery of a module before progressing to the next. Mastery is demonstrated by passing the module test at a high level of proficiency. In our implementations, "high level" has usually meant 90%. Keller indicated that he expected 100%, ameliorated only by a residual discretion given to proctors who could rule on doubtful answers.

Rewards for success

Success is rewarded in the Keller Plan. In most implementations, the reward is a fixed number of marks which are credited toward the final course grade. In our implementations of the Keller Plan, we award a total of 60% of the course mark to module examinations, the remaining 40% being awarded by a traditional final examination.

Keller was opposed to any direct penalty for failure, considering that it took the pleasure out of the learning experience. There is an indirect penalty since the student may not retake a module test immediately. In most Keller Plan courses, this delay is implemented by a rule which limits students to one test in any single day.

Computer testing

A Keller Plan course is very test-intensive. Students require an average of between one and a half and two attempts at each module test. This has two consequences for anyone who would implement a Keller Plan course. First, since the student may require several attempts at passing a module exam, it is necessary to develop several alternative versions of each test. Keller and his associates recommended four versions of each test.

Secondly, each of these tests must be marked. Keller used "proctors", often students themselves, to mark the module tests. Keller cautioned that these "proctors" were not to consider themselves as tutors or instructors, but acknowledged that some tutorial effect would take place during the marking of the module test.

We have implemented the Keller Plan by replacing human proctors with computers. In order to understand the details of the computer procedures, we must make a short digression into the theory of testing.

Types of testing

Educational theorists classify testing procedures in several different ways which reflect their general purpose.

Formative and Summative assessment

"Formative assessment" is assessment which is used for the sole purpose of assisting the student to determine areas of weakness. It is not part of the final assessment mark. The theory is that students lack the critical ability to assess their own knowledge. See Boud and Tyree "Self and Peer Assessment in Education: a preliminary study in Law" (1980) 15 Journal of the Society of Public Teachers of Law 65-74; Rawson and Tyree, "Self and Peer Assessment in Legal Education", (1989) 1 Legal Education Review 135

"Summative assessment" is the assessment at the end of the course, or at the end of a segment of the course, which is intended to be a measure of performance. It is the assessment which is used to determine the final course mark.

Norm referenced and criterion referenced assessment

Norm referenced testing is designed to discriminate between students. Most examinations in higher education are norm referenced.

Criterion referenced testing is designed to check that a student has obtained a certain level of development in an area of knowledge. Criterion referenced testing is most suitable for formative testing.

See Heywood, J Assessment in Higher Education, 2nd Edition, Wiley, New York, 1989.

Types of questions

We have used three different types of questions in our experimentation with computer testing. An obvious candidate for computer testing is multiple choice questions. A refinement of these is what we call a "tree" question. In the Intellectual Property course, we have moved exclusively to short answer questions. We will discuss each of these briefly.

Multiple Choice

Most people presume that multiple choice questions (MCQs) are the only type that a computer can handle. Perhaps for this reason, there has been little interest in computer testing among law teachers.

It seems that most law teachers believe that MCQs are incapable of serious testing in law because "law is not black and white". However, most of the educational literature on testing emphasises that MCQs can be used as a substitute for any other kind of test questions: see Heywood, J Assessment in Higher Education, 2nd Edition, Wiley, New York, 1989; Ebel RL and Frisbie, DA Essentials of Educational Measurement, 4th Edition, Prentice-Hall, Englewood Cliffs, 1986. We will not debate the point further here except to say that we believe that the educational testing experts are right.

Our experience has led us to reject MCQs as the main form of testing for other reasons. First, "tree" questions are a cheaper substitute (in terms of construction time) when the testing is done by computer. Secondly, we are interested primarily in Keller Plan courses where the students have an unlimited number of attempts at each examination. In such a circumstance, there must be a large number of questions to avoid the "arcade effect", the situation where students have seen, or believe that they have seen, the question before and so respond to the question with little involvement in the process.

"Tree" Questions

Tree questions are simply multiple level MCQs, made possible by the branching capabilities of computers. They are much easier to write since it is not necessary to present all of the "distractors" in one place and since it is possible to tailor the lower levels of the question in response to the student's answer.

A simple example of a tree question: in Technology Law we wish to test if the student has understood the basic concept of the "breeder" in the Plant Varieties Act 1987 (Cth). We present a fact situation in which the lead character develops a new cultivar. It is not clear from the fact situation if the development is or is not in the course of his employment. There is no "right" answer. The "tree" looks like this:

Who is the breeder?
    Jones
        Course of employment?
            Yes - Fail: inconsistent answers
            No - Pass
    University
        Course of employment?
            Yes - Pass
            No - Fail: inconsistent answers

The same issue could probably be tested with a MCQ, but it would be much more difficult to write so that the answer was not obvious.
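The branching logic of such a tree question is easy to represent in software. The following Python fragment is a minimal sketch of the breeder question as nested tables; the names and structure are invented for illustration and do not reflect the actual LES implementation.

```python
# A minimal sketch of a "tree" question as nested dictionaries.
# The structure mirrors the breeder example above; names are illustrative
# and do not reflect the actual LES implementation.

TREE = {
    "prompt": "Who is the breeder?",
    "choices": {
        "Jones": {
            "prompt": "Course of employment?",
            "choices": {"Yes": "Fail: inconsistent answers", "No": "Pass"},
        },
        "University": {
            "prompt": "Course of employment?",
            "choices": {"Yes": "Pass", "No": "Fail: inconsistent answers"},
        },
    },
}

def run_tree(node, answers):
    """Walk the tree with a pre-recorded list of answers; return the outcome."""
    for answer in answers:
        node = node["choices"][answer]
        if isinstance(node, str):  # a leaf holds the pass/fail outcome
            return node
    raise ValueError("ran out of answers before reaching an outcome")

print(run_tree(TREE, ["Jones", "No"]))       # Pass
print(run_tree(TREE, ["University", "No"]))  # Fail: inconsistent answers
```

Note that the second-level question is tailored to the first-level answer, which is what makes tree questions cheaper to write than a flat MCQ covering the same point.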

Short Answer

When the student may be tested several times on the same subject matter, as in the Keller Plan course, we found that tree questions were still subject to the "arcade effect". For this reason, we wished to move to short answer questions. "Short answer" for this purpose is a question which can be answered in about 10 lines.

We theorised that a test composed of short answer questions would require a more substantive involvement by students. Even if there was substantial repetition, the student could not get through "by accident".

The examination system for Intellectual Property 1992 is composed entirely of short answer questions. We have found that our hopes were justified, and that the system functions both as a good examination system and as a powerful tutorial system. A welcome side benefit of using short answer questions is that these are very much easier for us to construct.

The problem with short answer questions is to devise a method of computer marking. We discuss this below.

The software

LES - tree questions

Tree questions are easily implemented using LES, the program developed by Andrew Mowbray as part of the DataLex Project. LES provides complete text handling and record keeping capabilities. We will not discuss either LES or tree questions further here since we find short answer questions to be more useful in Keller Plan courses.

CRES - short answer questions

As mentioned above, the main problem with the computer administration of short answer questions is the problem of marking. We arrived at a simple but effective solution: force the students to mark their own answers! This is done by means of a number of "critical review" questions which are themselves multiple choice questions.

In practice, the student is presented with a question which demands a short answer. After writing the answer, the student "submits" the answer. The answer is then displayed at the top of the screen and the first of a series of questions about the answer is displayed below. The precise series of questions will depend upon the student answers. In other words, the "critical review" questions are the components of a tree question, but the important difference is that the focus of the tree question is on the characteristics of the student answer rather than an attempt to test the substantive material directly.

All of this is more complicated to describe than it is in practice. The "critical review" questions are similar to the notes that one would give to any marker. An example of a CRES question is given in Appendix B.
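The sequencing of critical review questions can be sketched as a small table of states. The following Python fragment is illustrative only; the state names follow the rev1 ... rev4 style used in the appendix, but the wording is abbreviated and the actual CRES program differed in its details.

```python
# A sketch of a CRES critical-review sequence as a state table.
# Each state holds a question and the next state for each y/n reply;
# "pass" and "fail" are terminal. Names and wording are illustrative.

REVIEW = {
    "rev1": ("Will the Minister succeed in restraining publication?",
             {"y": "rev2", "n": "rev3"}),
    "rev2": ("Has your answer referred to the public interest requirement?",
             {"y": "rev4", "n": "fail"}),
    "rev3": ("Has your answer referred to the public interest requirement?",
             {"y": "rev4", "n": "fail"}),
    "rev4": ("Have you cited the Spycatcher case or a similar case?",
             {"y": "pass", "n": "fail"}),
}

def review(replies, start="rev1"):
    """Run the review using pre-recorded y/n replies keyed by state name."""
    state = start
    while state not in ("pass", "fail"):
        _question, transitions = REVIEW[state]
        state = transitions[replies[state]]
    return state

print(review({"rev1": "y", "rev2": "y", "rev4": "y"}))  # pass
```

The important point is visible in the table: the review questions interrogate the characteristics of the student's own answer, not the substantive law directly.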

SAGES - Automatic marking of short answer questions

CRES questions pose no difficulty if used only in formative testing. If they are used in summative assessment then we must face the fact that students are not always honest when their own self interest is at stake. The "correct" answer to the critical review questions is usually self-evident, so that the temptation to give that answer is strong.

SAGES is an artificial intelligence program which uses a combination of parsing and statistical techniques to mark short answer questions. We use it as a "watchdog" to assist the students to remember their obligations. Students know that SAGES marks every one of their answers. If the result of the SAGES mark and the student's CRES mark differ, then an automatic message is sent to the teacher who then reviews the answer personally.
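The watchdog logic itself is simple: an answer is referred whenever the two marks disagree. A minimal Python sketch, with invented function and data names:

```python
# A sketch of the SAGES "watchdog": when the student's self-assessed CRES
# mark and the automatic SAGES mark disagree, the answer is referred to
# the teacher for personal review. All names here are illustrative.

def watchdog(cres_mark, sages_mark):
    """Return 'refer' when the marks disagree, otherwise the agreed mark."""
    return "refer" if cres_mark != sages_mark else cres_mark

# Counting referrals over a batch of (CRES, SAGES) mark pairs:
batch = [("pass", "pass"), ("pass", "fail"), ("fail", "fail"), ("fail", "pass")]
referrals = [pair for pair in batch if watchdog(*pair) == "refer"]
print(len(referrals), "of", len(batch), "answers referred to the teacher")
```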

The Intellectual Property Implementation

Intellectual Property was offered as a Keller Plan course for the first time in Semester 1, 1992. It is the third stream of IP to be offered at the University of Sydney. One other stream was offered in the first semester. Enrolment in the Keller Plan course was 40, in the "normal" stream about 60.

Materials

The instructional materials consist of two textbooks and a series of "study guides". The textbooks are McKeough and Stewart, "Intellectual Property in Australia", Butterworths, 1991, and Blakeney and McKeough, "Intellectual Property Commentary and Materials", 2nd ed, Law Book Co.

Each module is defined by its study guide. The study guide contains a statement of behavioural objectives for the module. These objectives identify as precisely as possible what it is that the teacher expects the student to be able to do when the module is successfully mastered.

The second part of the study guide identifies the pages of each text that the student should read. This is followed by information which should assist the student in reading the texts. This information may include some additional text if the teacher thinks that the textbooks are not complete on a particular point. There will be information on additional developments in the law which have occurred since publication. This is also a place where the teacher may add their own views on the subject matter, emphasising or disagreeing with particular parts of the texts.

The Module Structure

Intellectual Property 1992 contains 20 modules. Eighteen of the modules are readings from the two textbooks. Modules 19 and 20 rely on extraneous material, including the decision of the High Court in Autodesk Inc v Dyason. Modules average 50 pages of reading from the prescribed texts.

The Module Exam structure

Most modules have between five and seven behavioural objectives. The tests reflect this structure, and the typical test has seven questions. Student time on each question averages between five and ten minutes, and the pass rates for the individual questions range between 100% and 40%. In the view of the teachers, a question probably requires attention if its pass rate is below 75%, since either the objective being tested is unrealistic or, more likely, the question contains ambiguities or simply fails to test what the teacher thought was being tested.
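The flagging rule described above is easy to automate. A minimal sketch, with invented question labels and rates:

```python
# Flag questions whose pass rate falls below the 75% attention threshold
# described above. The question labels and rates are invented examples.

def questions_needing_attention(pass_rates, threshold=0.75):
    """Return the ids of questions whose pass rate is below the threshold."""
    return sorted(q for q, rate in pass_rates.items() if rate < threshold)

rates = {"q1": 1.00, "q2": 0.82, "q3": 0.60, "q4": 0.40}
print(questions_needing_attention(rates))  # ['q3', 'q4']
```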

In order to allow for the re-testing which is inherent in the Keller Plan course, each question is "doubled", that is, there are two questions which are related in that they are attempting to test the same point. The actual question presented to the student is selected at random from the "pool" of two questions. This allows the construction of 128 distinct module examinations for each module, although many of these examinations have a number of common questions.
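The arithmetic of the "doubling" scheme is easy to check: with seven question slots and a pool of two questions per slot there are 2 to the power 7 = 128 possible examinations. A Python sketch, with invented question labels:

```python
import itertools
import random

# A sketch of module-exam assembly: seven question slots, each backed by
# a "pool" of two interchangeable questions testing the same point; one
# question is drawn at random from each pool. Labels are invented.

POOLS = [(f"q{i}a", f"q{i}b") for i in range(1, 8)]

def make_exam(rng=random):
    """Select one question at random from each of the seven pools."""
    return tuple(rng.choice(pool) for pool in POOLS)

# Every possible examination is an element of the Cartesian product:
all_exams = set(itertools.product(*POOLS))
print(len(all_exams))  # 128, i.e. 2 ** 7 distinct examinations
```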

There are approximately 275 questions in the existing system. With their associated model answers, tutorial feedback and review questions, the system contains approximately 80,000 words. Doubling each question seems to have been adequate, although we will add questions to the system next year with an aim of having three questions on each examination point.

Acknowledgements

The Keller Plan Project and the development of the CRES and SAGES software have been supported by generous grants from the Law Foundation of New South Wales. The Keller Plan Project is managed by Alan Tyree and Shirley Rawson. Chris Hutchinson is responsible for the programming.

Appendices

A Sample CRES question

e12q1.1 [government information]

Cynthia is a journalist with the Sydney Trumpet. She has a "source" within the Commonwealth Department of Defence which "leaks" information to her concerning a secret base that was built in Australia at the time of the Viet Nam war. There is little doubt that her source has breached a duty of confidence, but Cynthia and the Trumpet plan to publish the information. The Minister seeks your advice as to the possibility of restraining publication through an action for breach of confidence.

e12a1.1

Cynthia and the Trumpet will be bound by a duty of confidence from the time that they are aware that the "source" was in breach of confidence by passing the information to them. However, in order to restrain publication, it must be shown that it is in the public interest to treat it as confidential: Attorney-General (UK) v Heinemann Publishers Australia Pty Ltd (The Spycatcher case). It is not enough that the government merely finds the information embarrassing.

e12t1.1

Review the "additional" requirement for restraint when the plaintiff is the government, as explained in Attorney-General (UK) v Heinemann Publishers Australia Pty Ltd (The Spycatcher case)

rev1

In your opinion, will the Minister succeed in restraining publication?

"y" rev2

"n" rev3

rev2

In order to restrain publication, it must be shown that it is in the public interest to treat the information as confidential. Has your answer referred to this point?

"y" rev4

"n" fail

rev3

The Minister will fail unless it can be shown that it is in the public interest to treat the information as confidential. Has your answer referred to this point?

"y" rev4

"n" fail

rev4

You should have justified your arguments by reference to Attorney-General (UK) v Heinemann Publishers Australia Pty Ltd (The Spycatcher case) or to some similar case. Have you done so?

"y" pass

"n" fail

SAGES performance

SAGES referred about 22% of all answers to the teacher. A detailed analysis is not yet available, but the impression of the teacher is that when a student marked "pass" and SAGES marked "fail", it was usually SAGES which was at fault. When the student marked "fail" and SAGES marked "pass", SAGES was usually correct and the problem could be traced to unduly strict Critical Review Questions.

The pressures of time meant that it was not possible to add as many models to the SAGES system as originally hoped. In those cases where it was possible to add models, it seems as though SAGES performed better. In other words, SAGES gets "smarter" as it learns more.

SAGES performed its "watchdog" function admirably. The teacher took every opportunity to show the students that SAGES was working. The result was that not a single instance of "cheating" on the Critical Review Questions was found, although the system administered nearly 8000 questions during the course.

Date: 2013-01-12 Sat
