User Modeling and User-Adapted Interaction 9: 79-91, 1999. © 1999 Kluwer Academic Publishers. Printed in the Netherlands.
Exploring Mixed-Initiative Dialogue Using Computer Dialogue Simulation
MASATO ISHIZAKI (1), MATTHEW CROCKER (2) and CHRIS MELLISH (3)
(1) Japan Advanced Institute of Science and Technology, Tatsunokuchi, Nomi, Ishikawa, 923-1292 Japan
(2) Centre for Cognitive Science, Univ. of Edinburgh, 2 Buccleuch Place, Edinburgh EH8 9LW, UK
(3) Dept. of Artificial Intelligence, Univ. of Edinburgh, 80 Southbridge, Edinburgh EH1 1HN, UK
(Received: 1 December 1997; accepted: 10 July 1998)

Abstract. This paper experimentally shows that mixed-initiative dialogue is not always more efficient than non-mixed-initiative dialogue in route finding tasks. Based on the dialogue model proposed in Conversation Analysis and Discourse Analysis à la the Birmingham school and Whittaker and Stenton's definition of initiative, we implement dialogue systems and obtain experimental results by making the systems interact with each other. Across a variety of instantiations of the dialogue model, the results show that with easy problems the efficiency of mixed-initiative dialogue is a little better than or equal to that of non-mixed-initiative dialogue, while with difficult problems mixed-initiative dialogue is less efficient than non-mixed-initiative dialogue.
Key words: mixed-initiative dialogue, computer dialogue simulation, efficiency of dialogue, discourse analysis, task-oriented dialogue.
1. Introduction

In keyboard human-computer dialogue research, user input has been used to track knowledge states. For example, when a navigation system gives directions using a landmark unfamiliar to the user, the user will interrupt to request information from the system. The necessity of such subdialogues has been recognised, and some mechanisms have been proposed to handle these interruptions. Systems that allow user interruptions have been called mixed-initiative dialogue systems. In these systems, the task domains are ones in which the roles of conversational participants are relatively fixed, and the initiative changes only in limited situations. Recently, there has been increasing interest in collaboration with computer systems having incomplete knowledge. One reason for this is that humans do not have complete knowledge, and they make the best use of interaction with others and the environment. The other reason concerns the robustness of systems, which should work not only in a toy world but also in the real world. In this area, autonomous programs called 'agents', which enable users to use computers effectively for solving
problems, have been extensively studied. How agents cooperate to solve problems in a distributed environment is one of the most important research topics. The situations where agents with incomplete knowledge should take the initiative are not limited to cases of interruption.
Why does user modelling research concern 'initiative'? One of the aims of user modelling research is to realise more friendly and efficient use of computer systems by taking the knowledge states of users into account. In some expert systems, for example, user models can be used to change explanations of their output according to the user's knowledge level (Paris, 1993). If the system can reason that the user does not know some of the terms in an explanation, the system adds some sentences or a paragraph to elucidate the unknown terms and so improve the original's understandability (Cawsey, 1993; Moore, 1994). If there is a chance the user might draw a wrong conclusion from the explanation, the system tries to choose expressions that will prevent this. Even if the user does not make a request explicitly, the system can provide useful information by reasoning based on the user model. Allen et al. (1996) proposed a mixed-initiative planning model and incorporated it into their spoken human-computer dialogue systems. In their model, the system makes a plan based on incomplete information, communicates it to the user, and examines whether it can work. If the plan can work, the system proceeds to make the next plan; if not, the system re-plans using new information. When should the system provide information to the user, and when should it patiently wait for the user's input? 'Initiative' can be a key concept for exploring this question. Research on 'initiative' is at an early stage, and thus this paper focuses on the basic question of whether mixed-initiative dialogue is always more efficient than non-mixed-initiative dialogue, using computer dialogue simulation.

This paper is organised as follows. Firstly, previous studies on initiative are reviewed. Secondly, the experimental settings and programs are explained. Lastly, example outputs and results of the computer dialogue simulations of initiative are discussed.
2. Related Work

Keyboard human-computer dialogue systems are normally developed in instruction or tutoring and information providing domains. The term mixed-initiative dialogue systems signifies those systems that can handle clarifications or interruptions. Because of the particular task domains they target, the roles of conversational participants are fixed and initiative changes are rather limited.

Whittaker and Stenton (1988) defined initiative using a classification of utterance types, such as assertions, commands, questions and prompts. According to their definition, a conversational participant has the initiative when she makes some utterance other than a response to her partner's utterance. The reasoning here is that a responsive utterance should be thought of as one elicited by the previous speaker rather than one directing the conversation in its own right. A participant does not
have the initiative (or her partner has the initiative) when a prompt is used, since this clearly abdicates the opportunity for expressing some propositional content. Whittaker and Stenton (1988) analysed software consulting dialogues between users and consultants. They showed that the initiative changes after utterances with no propositional content like repetition, that cue phrases are not reliable predictors of initiative change, and that in these dialogues users tend to take the initiative in the first half, while consultants take it in the second half. Walker and Whittaker (1990) analysed the relationship between the initiative and the distribution of anaphoric expressions using problem solving and advisory dialogues. They observed that in problem solving dialogues both conversational participants equally take the initiative, while in advisory dialogues experts, who give advice to novices, take the initiative in most cases. They showed that all anaphoric relations except demonstratives hold within a segment delimited by initiative change.

Smith and Hipp (1994) built a spoken dialogue system for trouble shooting in electric circuits, and implemented a mechanism for gradually changing the initiative from 'directive' to 'passive' between the system and the user by changing goal adoption strategies. They conducted system-evaluation experiments in which subjects actually conversed with computers using speech to fix problems in an electric circuit. With regard to initiative, they compared the 'directive' mode, in which the system always tries to achieve its own goal, with the 'declarative' mode, in which the system tries to find and adopt a common (sub)goal with the user, and showed that in the 'declarative' mode the system can achieve a goal with fewer, but more effective, utterances than in the 'directive' mode.

Guinn (1998) conducted computer dialogue simulations of selecting an answer which has some specified characteristics from a set of candidates. Determining a criminal from suspects or a faulty gate in an electric circuit are examples of this domain. Problems in this domain can be solved by examining candidates one by one to see if they have the specified characteristics, and the order of the examination affects the efficiency of the problem solving. He characterised this problem domain as collaborative search and conducted experiments in which domain knowledge is distributed over computer agents. He showed that agents can solve their problems faster when they share information with each other than when only one agent provides information, and that giving information on internal states can reduce the search space when agents cannot decide which candidate they will examine.
Chu-Carroll and Brown (1998) pointed out the necessity of making a distinction between task initiative and dialogue initiative. They built a decision tree for predicting initiative change by applying the ID4.5 machine learning algorithm to TRAINS corpora (Gross et al., 1995) annotated with dialogue structures and cue phrases. Their study focuses on response generation. Based on the distinction between the types of initiative introduced by Chu-Carroll and Brown (1998), Guinn's study and our study - both use computer dialogue simulation - can be clearly differentiated. That is, Guinn's concerns task initiative and ours concerns dialogue initiative.
Table I. I-R modelling of non-MID dialogues without and with clarifications

    Without clarifications    With clarifications
    A: U_I1A                  A: U_I1A
    B: U_R1B                  B: U_CI1B
    A: U_I2A                  A: U_CR1A
    B: U_R2B                  B: U_R1B
    A: U_I3A                  ...
    B: U_R3B
    ...
Research on initiative is at an early stage, and thus this paper intends to contribute to this research area by examining the basic question of whether mixed-initiative dialogue is always more efficient than non-mixed-initiative dialogue, using computer dialogue simulation.
3. Computer Dialogue Simulation

3.1. DIALOGUE MODELLING

In Conversation Analysis (Schegloff and Sacks, 1974; Levinson, 1983) and Discourse Analysis à la the Birmingham school (Stubbs, 1983; Coulthard, 1985; Stenström, 1994), dialogues are modelled by a basic unit consisting of initiating utterances (I) and responding utterances (R). (This is called an adjacency pair in Conversation Analysis.) The basic unit can include another unit, as in I1 I2 R2 R1, in which I and R with the same subscript form an adjacency pair.(1) Ahrenberg et al. (1995) used this modelling to build their spoken dialogue system.

Whittaker and Stenton's (1988) definition can be re-stated based on this modelling. That is, a conversational participant takes the initiative when she initiates an utterance other than one responding to the other participant's initiating utterance. Dialogues in which one participant always takes the initiative, with embedding (in the right-hand side) and without embedding (in the left-hand side), are shown in Table I. 'U' represents an utterance, and 'I', 'R' and 'A', 'B' indicate the roles and speakers of an utterance, respectively. The attached numbers show the correspondence between initiating and responding utterances. The non-mixed-initiative dialogue without embedding in the left-hand side of Table I shows that speaker A's first initiating utterance U_I1A is responded to by speaker B's utterance U_R1B, while the non-mixed-initiative dialogue with embedding in the right-hand side represents that speaker A's first initiating utterance U_I1A is interrupted by speaker B's clarification U_CI1B, which is responded to by A's clarification answer U_CR1A, before A's original utterance is responded to by speaker B's answer U_R1B.
(1) This is called an insertion sequence in Conversation Analysis and embedding in Discourse Analysis.
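Whittaker and Stenton's rule can also be stated procedurally: an initiating utterance claims the initiative, a prompt abdicates it, and a plain response leaves it where it is. The following sketch is our illustration only, not the authors' implementation; the utterance-type labels and the Turn record are assumptions made for the example.

```python
# Toy tracker for Whittaker and Stenton's (1988) initiative rule.
# Assumption: utterances are pre-classified into the four types they name.
from dataclasses import dataclass

INITIATING = {"assertion", "command", "question"}  # these claim the initiative

@dataclass
class Turn:
    speaker: str          # "A" or "B"
    utterance_type: str   # "assertion", "command", "question", "response", "prompt"

def track_initiative(turns):
    """Return who holds the initiative after each turn."""
    holder = None
    history = []
    for t in turns:
        if t.utterance_type in INITIATING:
            holder = t.speaker                          # speaker takes the initiative
        elif t.utterance_type == "prompt":
            holder = "B" if t.speaker == "A" else "A"   # a prompt abdicates it
        # a plain response leaves the initiative unchanged
        history.append(holder)
    return history

dialogue = [Turn("A", "question"), Turn("B", "response"),
            Turn("B", "assertion"), Turn("A", "prompt")]
print(track_initiative(dialogue))  # ['A', 'A', 'B', 'B']
```

Here A's question claims the initiative, B's response leaves it with A, B's assertion takes it, and A's prompt hands it back to B.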
Table II. I-R modelling of MID dialogues without and with clarifications

    Without clarifications    With clarifications
    A: U_I1A                  A: U_I1A
    B: U_R1B U_I2B            B: U_R1B U_I2B
    A: U_R2A U_I3A            A: U_CI2A
    B: U_R3B U_I4B            B: U_CR2B
    ...                       A: U_R2A
                              B: U_I3B
                              A: U_R3A U_I4A
                              ...
Mixed-initiative dialogue can be modelled by allowing conversational agents both to respond to each other's initiating utterances and to initiate new utterances. Table II shows the I-R modelling of mixed-initiative dialogue with and without embedding. The mixed-initiative dialogue without embedding in the left-hand side of Table II shows that speaker A's first initiating utterance U_I1A is responded to by speaker B's answer U_R1B, while speaker B initiates a new utterance U_I2B in the same turn. The mixed-initiative dialogue with embedding in the right-hand side represents that in the third turn speaker A initiates the clarification utterance U_CI2A, which is responded to by speaker B's answer U_CR2B. In the fifth turn, speaker A closes the clarification exchange, which is followed by speaker B's new initiating utterance.
The I-R model abstracts utterances in two respects: the types of utterances, such as a request and a question, and their contents. In the following, we examine the efficiency of mixed- and non-mixed-initiative dialogues by instantiating these variations of the I-R modelling.
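The embedding of I-R units behaves like bracket matching: a sequence such as I1 I2 R2 R1 is well formed because each R closes the most recently opened I. This can be checked with a stack, as in the following toy sketch (our illustration, not part of the original system):

```python
# Check that initiating (I) and responding (R) utterances nest like
# adjacency pairs with embedding, e.g. I1 I2 R2 R1 from the text.
def well_nested(seq):
    """seq: list of ('I', n) or ('R', n); each R must close the most recent open I."""
    stack = []
    for role, idx in seq:
        if role == "I":
            stack.append(idx)           # open a new exchange
        elif role == "R":
            if not stack or stack.pop() != idx:
                return False            # R closes the wrong (or no) exchange
    return not stack                    # every exchange must be closed

print(well_nested([("I", 1), ("I", 2), ("R", 2), ("R", 1)]))  # True
print(well_nested([("I", 1), ("I", 2), ("R", 1), ("R", 2)]))  # False: crossed pairs
```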
Figure 1. A module diagram of the computer dialogue simulation programs: a system manager relays utterances between two conversation programs, each comprising an utterance parser, a dialogue manager and an utterance generator that exchange semantic expressions internally.
Figure 2. Example maps for both speakers (A and B).

3.2. EXPERIMENTAL PROGRAMS

Figure 1 shows the set-up for the computer dialogue simulation. The simulation programs are a system management program and two conversation handling programs. The conversation programs exchange messages through the system management program, which can activate and de-activate them. Each conversation program consists of an utterance parser, a dialogue management program, and an utterance generator. The utterance parser analyses an input string from the system management program and outputs its semantic expression. The dialogue management program updates the system state based on the history of the system and the semantic expression created by the parser, and builds an output semantic expression using a given goal and the system state. The utterance generator makes a surface expression from the semantic expression.
We used the system management program made for DiaLeague, a computer-computer dialogue system contest (Hasida and Den, 1997), and the utterance parser and generator made for a Japanese-to-English machine translation system by ATR Interpreting Telecommunications Research Labs (Tashiro and Morimoto, 1995; Akamine et al., 1994). The first author built the dialogue management program in about 4000 lines of LISP code.
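The message loop through these modules can be sketched as follows. All class and function names here are illustrative assumptions, not the DiaLeague or ATR interfaces, and the toy "patience" state merely stands in for the real dialogue manager's route-finding logic.

```python
# Sketch of Figure 1's pipeline: each conversation program runs
# utterance parser -> dialogue manager -> utterance generator, and the
# system manager relays surface strings between the two programs.
class ConversationProgram:
    def __init__(self, name, patience):
        self.name = name
        self.patience = patience       # toy state: how many more turns to speak

    def parse(self, surface):          # utterance parser: string -> semantics
        return {"text": surface}

    def manage(self, sem):             # dialogue manager: update state, plan a reply
        self.patience -= 1
        if self.patience < 0:
            return None                # nothing more to say: dialogue ends
        return {"say": f"{self.name}: ack {sem['text']!r}"}

    def generate(self, reply_sem):     # utterance generator: semantics -> string
        return reply_sem["say"]

def run_simulation(first_message, a, b, max_turns=20):
    message, hearer, other = first_message, b, a   # a speaks first, b hears
    transcript = [message]
    for _ in range(max_turns):
        reply_sem = hearer.manage(hearer.parse(message))
        if reply_sem is None:
            break
        message = hearer.generate(reply_sem)
        transcript.append(message)
        hearer, other = other, hearer  # the system manager switches turns
    return transcript

transcript = run_simulation("go from Node03 to Node02",
                            ConversationProgram("A", 2), ConversationProgram("B", 2))
print(len(transcript))  # 5: the opening message plus four acknowledgements
```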
Table III. The characteristics of the maps used in the computer dialogue simulation (paired figures are for the two speakers' maps)

            # of      # of      # of nodes   Length of        Length of
            nodes     links     w/o names    shortest route   answer route
    Maps 1  45, 45    56, 56    3, 3         12, 15           15
    Maps 2  43, 43    54, 55    3, 3         7, 8             17
    Maps 3  50, 50    58, 58    3, 3         12, 16           50
4. Experimental Results and Discussions

4.1. EXPERIMENTAL TASK

In the route finding task, both conversational participants have train route maps showing connections among stations, or nodes. These maps have start and goal stations; the map one agent has might differ from that of the other concerning station names and connections between stations: a station whose position is the same on the two maps might have its name written on one map but not on the other, and a connection between two stations on one map might not be on the other. Under these circumstances, the conversational participants are asked to find the shortest connected path from the start to the goal station, where the path must be connected on both maps. The route finding task can be thought of as a modified version of the Map Task (Anderson et al., 1991) for computer simulation of mixed-initiative dialogues. The route finding can be simulated as graph search, and the roles of the conversational participants are not fixed, since neither participant has dominant information or social status.

Figure 2 shows example maps, in which nodes are represented by rectangular boxes with or without node names, and links are represented by black and gray lines. Gray lines signify the shortest path for each map. The Japanese place names used in the original maps have been simplified to 'Node' plus a sequential number to improve the understandability of the maps. In cases where a conversational participant has nodes without names, she is instructed to find them by obtaining information from her partner. In the experiments, three maps created for DiaLeague (Hasida and Den, 1997) were used. The numbers of nodes, links, and nodes whose names are unknown, the lengths of the shortest path, and the length of the answer (common shortest) route are shown in Table III. Table III shows that

(1) the maps are almost all the same size,
(2) the number of nodes whose names are unknown is the same, and
(3) the difficulty of solving a problem, which is estimated by the ratio of the lengths of a shortest route and the answer route, gradually increases from map 1 to 3.
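For an omniscient observer who could see both maps at once, the task reduces to breadth-first search over the links present on both maps; the point of the dialogue is that neither participant can see this intersection directly. A toy sketch with invented node names and links:

```python
# The answer route must be connected on BOTH maps, so an omniscient solver
# searches the intersection of the two edge sets with BFS.
from collections import deque

def shortest_common_route(links_a, links_b, start, goal):
    """BFS over edges present on both maps; returns a node list or None."""
    common = {frozenset(e) for e in links_a} & {frozenset(e) for e in links_b}
    adjacency = {}
    for edge in common:
        u, v = tuple(edge)
        adjacency.setdefault(u, []).append(v)
        adjacency.setdefault(v, []).append(u)
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adjacency.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable on the common map

map_a = [("N1", "N2"), ("N2", "N3"), ("N1", "N4")]      # speaker A's links
map_b = [("N1", "N2"), ("N2", "N3"), ("N4", "N3")]      # speaker B's links
print(shortest_common_route(map_a, map_b, "N1", "N3"))  # ['N1', 'N2', 'N3']
```

Note that N4 is reachable on each map separately but on neither's intersection, which is exactly why the common route can be much longer than either individual shortest route.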
Table IV. Dialogue simulation example of non-mixed-initiative dialogue

    1  A  I would like you to go from Node03 to Node06.
    2  B  (No.) I am not going from Node03 to Node06.
    3  A  I would like you to go from Node03 to Node02.
    4  B  (Ok.) I am going from Node03 to Node02.
    5  A  I would like you to go from Node02 to Node04.
Table V. Dialogue simulation example of mixed-initiative dialogue

    1  A  I would like you to go from Node03 to Node06.
    2  B  (No.) I am not going from Node03 to Node06.
    3  A  I would like you to go from Node03 to Node02.
    4  B  (Ok.) I am going from Node03 to Node02.
          I would like you to go from Node02 to the stop at the left of Node02.
    5  A  (Ok.) I am going from Node02 to Node04.
          I would like you to go from Node04 to the stop above Node04.
4.2. SIMULATED DIALOGUE EXAMPLES

The system can produce various dialogues with respect to initiative, utterance types, and the amount of content in a turn. Here the amount of content is measured by the number of unit paths connecting adjacent nodes: single content and multiple contents signify one unit path and multiple unit paths, respectively. The actual system outputs for the example maps in Figure 2 are explained so as to contrast the difference in one parameter while holding the others constant. The original Japanese output has been translated into English.

As a base example, Table IV shows a non-mixed-initiative, request-based, single content dialogue. At the first and second turns, speaker A proposes a path from Node03 to Node06, and speaker B rejects A's request, because there are no outgoing links from Node06 on B's map. At the third and fourth turns, A proposes another candidate path from Node03 to Node02, and B accepts the proposed route, because A's and B's route candidates are the same up to Node08. At the fifth turn, A proposes the next path from Node02 to Node04.

Table V shows a mixed-initiative, request-based, single content dialogue. The first two turns are the same as in the non-mixed-initiative, request-based, single content dialogue. However, at the third and fourth turns, A proposes a path from Node03 to Node02, which is accepted by B at the fourth turn. In the same turn, B takes the initiative by proposing a new path from Node02 to Node04. In this utterance, B describes Node04 as "the stop at the left of Node02", since Node04 has no name on A's map. At the fifth turn, A accepts B's proposal,
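The request-based, single-content exchange of Tables IV and V can be caricatured as follows. The acceptance criterion here (the requested link exists on the responder's own map) is a simplification of the system's actual route-candidate comparison, and the map data are invented for the example.

```python
# Toy responder for the request-based protocol: accept a proposed unit path
# exactly when the link appears on the responder's own map (a simplification).
def respond(request, own_links):
    src, dst = request
    if (src, dst) in own_links or (dst, src) in own_links:
        return f"(Ok.) I am going from {src} to {dst}."
    return f"(No.) I am not going from {src} to {dst}."

b_links = {("Node03", "Node02"), ("Node02", "Node04")}  # invented map for B
print(respond(("Node03", "Node06"), b_links))  # (No.) I am not going ...
print(respond(("Node03", "Node02"), b_links))  # (Ok.) I am going ...
```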
Table VI. Dialogue simulation example of question-based dialogue

    1  A  Is Node03 connected to Node06?
    2  B  (Yes.) Node03 is connected to Node06.
    3  A  I would like you to go from Node03 to Node06.
    4  B  (No.) I am not going from Node03 to Node06.
    5  A  Is Node03 connected to Node02?
Table VII. Dialogue simulation example with multiple utterances in a turn

    1  A  I would like you to go from Node03 to Node06.
          I would like you to go from Node06 to Node10.
    2  B  (No.) I am not going from Node03 to Node06.
    3  A  I would like you to go from Node03 to Node02.
          I would like you to go from Node02 to Node04.
    4  B  (Ok.) I am going from Node03 to Node02.
          (Ok.) I am going from Node02 to Node04.
in which A explicitly mentions Node04 so that B can learn it, and proposes a new path from Node04 to "the stop above Node04".

Table VI shows a mixed-initiative, question-based, single content dialogue. At the first turn, speaker A checks the connection between Node03 and Node06, and proposes this path after receiving B's positive answer about the connection. Upon receiving B's rejection of going from Node03 to Node06, A again checks a connection, between Node03 and Node02. The difference between request- and question-based dialogues is that the former just proposes a path while the latter checks the connection before proposing one.

Table VII shows a mixed-initiative, request-based, multiple contents dialogue. At the first and second turns, speaker A consecutively proposes a path from
Table VIII. The experimental results for map 1 (number of turns, number of characters)

                        Spkr 2
    Spkr 1  NMID        MID:RS      MID:QS      MID:RM      MID:QM
    RS      51, 780     31, 691     51, 1167    27, 889     33, 1327
    QS      90, 1647    54, 1180    89, 1955    36, 1082    48, 1713
    RM      27, 889     26, 890     32, 1015    29, 1023    35, 1469
    QM      40, 1785    35, 1402    51, 1974    29, 1443    30, 1741
Table IX. The experimental results for map 2 (number of turns, number of characters)

                        Spkr 2
    Spkr 1  NMID        MID:RS      MID:QS      MID:RM      MID:QM
    RS      62, 1056    70, 1784    93, 2383    55, 2141    79, 3285
    QS      105, 2103   119, 3002   170, 4276   75, 2629    75, 2921
    RM      34, 1203    37, 1518    51, 1847    36, 1543    38, 1825
    QM      47, 2188    55, 2530    65, 2729    63, 3068    66, 3414
Table X. The experimental results for map 3 (number of turns, number of characters)

                        Spkr 2
    Spkr 1  NMID        MID:RS      MID:QS      MID:RM      MID:QM
    RS      136, 2382   164, 4132   213, 5404   90, 3643    154, 6567
    QS      252, 5076   215, 5461   258, 6470   115, 4327   136, 5999
    RM      69, 2749    87, 3679    113, 4311   90, 4048    125, 5891
    QM      130, 5565   152, 6882   130, 5911   130, 6708   148, 7389
Node03 to Node10, and speaker B rejects the proposal. At the third turn, A again proposes a path, from Node03 to Node04, which is accepted by B.
4.3. EXPERIMENTAL RESULTS AND DISCUSSIONS

Tables VIII, IX and X show the experimental results, the numbers of turns and characters, obtained by running the implemented dialogue system for the three kinds of maps explained above. Variations of initiative are abbreviated as NMID (non-mixed-initiative dialogue) and MID (mixed-initiative dialogue), utterance types as R (request-based) and Q (question-based), and the number of contents as S (single content in a turn) and M (multiple contents in a turn).
4.3.1. The Characteristics of the Task

The problem difficulty, calculated as the ratio between the length of the shortest common route and that of each shortest route, can predict the efficiency of a dialogue. The problem difficulty approximately reflects the search space for solving the problem. Thus, difficult problems, such as those using map 3, require more information exchanges than easy ones, such as those using map 1. The reason the difficulty ratios between the maps are not exactly reflected in the numbers of characters is that some information might efficiently reduce the number of solution candidates.
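Using the Table III figures, the difficulty measure can be computed as the answer-route length over each map's shortest-route length; averaging over the two speakers' maps to get a single summary number is an assumption made here for the illustration.

```python
# Difficulty per map from Table III: answer route length divided by the
# mean of the two speakers' own shortest route lengths (averaging assumed).
table_iii = {  # map: ((shortest route on each speaker's map), answer route)
    "map 1": ((12, 15), 15),
    "map 2": ((7, 8), 17),
    "map 3": ((12, 16), 50),
}

ratios = {}
for name, (shortest, answer) in table_iii.items():
    ratios[name] = answer / (sum(shortest) / len(shortest))
    print(f"{name}: difficulty ~ {ratios[name]:.2f}")
```

Computed this way the difficulty rises from roughly 1.1 (map 1) through 2.3 (map 2) to 3.6 (map 3), matching the paper's claim that difficulty gradually increases from map 1 to map 3.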
4.3.2. Question-based vs. Request-based

Question-based dialogues (with route connection, or precondition, checking) are in the worst case two times less efficient than request-based ones in terms of the numbers of turns and characters. In the route finding task, proposing a route can function as a check of the route connection, because the second speaker should reject the first speaker's proposal when the route is disconnected. Thus, in question-based dialogues, precondition checking and proposal do the same job. This duplication is the reason dialogues with precondition checking are less efficient than those without it. In spite of this fact, conversational participants tend to ask the other participant about the route connection in actual human-human dialogues in the route finding task (Ishizaki, 1997). This can be explained by Grice's maxim of quality, 'do not say that for which you lack adequate evidence'. In our case, when people use expressions of request, they should have some evidence for the truth of the content of the request. If human-computer systems are to be used in situations where efficiency of interaction is the most important factor, friendliness (or naturalness) of interaction is not so important, and some operation failures due to interactional misunderstanding are not serious, request-based dialogues are better than question-based dialogues.

4.3.3. Single Content vs. Multiple Contents

Dialogues with single content are less efficient than those with multiple contents in most cases in terms of the number of turns; however, the former are more efficient than the latter in half of the cases in terms of the number of characters. The reason for this result is that multiple content dialogues need fewer turns, but their efficiency is degraded by contents based on wrong assumptions (about the route connection). Multiple content dialogues can be said to be suitable for applications in which the communication cost is high, not for applications in which the cost of utterances is high or there are many uncertain factors in the dialogue tasks.

4.3.4. Non-mixed Initiative vs. Mixed Initiative

With easy problems, such as those using map 1, the efficiency of mixed-initiative dialogue is a little better than or equal to that of non-mixed-initiative dialogue. However, with difficult problems mixed-initiative dialogue is less efficient than non-mixed-initiative dialogue. In the case of easy problems, the agents can solve them by alternately proposing solution candidates, and the proposals can reduce fruitless solution candidates. In the case of difficult problems, however, the agents have more opportunities to propose solution candidates that make the partner search useless problem space.
Collaboration is expected to improve task performance, including the efficiency of goal achievement. However, these simulations show that mixed-initiative dialogue is not always more efficient than non-mixed-initiative dialogue. In this
paper, we simulate simple utterances, rather than simulating complex utterances, implementing various strategies for recovering from misunderstanding, or considering communication costs. This is because we want to confirm a basic fact about initiative with as many unknown factors excluded as possible. Which factors need to be considered depends on the purposes of the applications. Our finding here establishes a starting point for inquiry into initiative and dialogue simulation techniques, and the dialogue modelling developed here can be used as a means to further examine initiative from empirical and theoretical perspectives.
5. Conclusion

This paper examined, on the basis of computer dialogue simulation for route finding tasks, the basic question of whether mixed-initiative dialogue is always more efficient than non-mixed-initiative dialogue. We adopted the dialogue modelling proposed in Conversation Analysis (Schegloff and Sacks, 1974; Levinson, 1983) and Discourse Analysis à la the Birmingham school (Stubbs, 1983; Coulthard, 1985; Stenström, 1994). The definition of initiative is based on Whittaker and Stenton (1988). We implemented a system to simulate the dialogue model for the route finding task, and instantiated variations of this modelling. The results showed that mixed-initiative dialogue is not always more efficient than non-mixed-initiative dialogue.
Acknowledgements

We would like to thank Dr. Jean Carletta of the Human Communication Research Centre of the University of Edinburgh and Dr. Yasuharu Den of the Graduate School of Information Science of the Nara Advanced Institute of Science and Technology for their helpful comments and suggestions.
References

Ahrenberg, L., N. Dahlbäck, and A. Jönsson: 1995, Coding scheme for studies of natural language dialogue. In: Proceedings of the AAAI Spring Symposium on Discourse Interpretation and Generation. Stanford University, California, USA, pp. 8-13.
Akamine, T., O. Furuse, and H. Iida: 1994, A comprehensive Japanese sentence generation for spoken language translation. Technical Report of the Japan Society of Artificial Intelligence SIG-J-94-01, 135-142. (In Japanese.)
Allen, J., B. Miller, E. Ringger, and T. Sikorski: 1996, A robust system for natural spoken dialogue. In: Proceedings of the Thirty-fourth Annual Meeting of the Association for Computational Linguistics. University of California, Santa Cruz, California, USA, pp. 62-70.
Anderson, A. H., M. Bader, E. G. Bard, G. Doherty, S. Garrod, S. Isard, J. Kowtko, J. McAllister, J. Miller, C. Sotillo, H. Thompson, and R. Weinert: 1991, The HCRC Map Task corpus. Language and Speech 34(4), 351-366.
Cawsey, A.: 1993, Explanation and Interaction. The MIT Press.
Chu-Carroll, J. and M. K. Brown: 1998, An evidential model for tracking initiative in collaborative dialogue interaction. User Modeling and User-Adapted Interaction 8(3-4), 215-254.
Coulthard, M.: 1985, An Introduction to Discourse Analysis. Longman.
Gross, D., J. Allen, and D. Traum: 1995, The TRAINS 91 dialogues. Technical report, Computer Science Department, The University of Rochester. TRAINS Technical Note 92-1.
Guinn, C. I.: 1998, Principles of mixed-initiative human-computer collaborative discourse. User Modeling and User-Adapted Interaction 8(3-4), 255-314.
Hasida, K. and Y. Den: 1997, A synthetic evaluation of dialogue systems. In: Proceedings of the First International Workshop on Human-Computer Communication. Grand Hotel Villa Serbelloni, Bellagio, Italy, pp. 77-82.
Ishizaki, M.: 1997, Mixed-initiative natural language dialogue with variable communicative modes. Ph.D. thesis, The Centre for Cognitive Science and The Department of Artificial Intelligence, The University of Edinburgh.
Levinson, S. C.: 1983, Pragmatics. Cambridge University Press.
Moore, J. D.: 1994, Participating in Explanatory Dialogues. The MIT Press.
Paris, C.: 1993, User Modelling in Text Generation. Frances Pinter.
Schegloff, E. A. and H. Sacks: 1974, Opening up closings. In: Ethnomethodology: Selected Readings. Penguin Education, pp. 233-264.
Smith, R. W. and D. R. Hipp: 1994, Spoken Natural Language Dialogue Systems. Oxford University Press.
Stenström, A. B.: 1994, An Introduction to Spoken Interaction. Longman.
Stubbs, M.: 1983, Discourse Analysis. Blackwell.
Tashiro, T. and T. Morimoto: 1995, A parsing toolkit for spoken language processing. Technical Report of the Information Processing Society of Japan SIG-NL-95-106, 67-72. (In Japanese.)
Walker, M. A. and S. Whittaker: 1990, Mixed initiative in dialogue: An investigation into discourse segmentation. In: Proceedings of the Twenty-eighth Annual Meeting of the Association for Computational Linguistics. University of Pittsburgh, Pennsylvania, USA, pp. 70-78.
Whittaker, S. and P. Stenton: 1988, Cues and control in expert-client dialogues. In: Proceedings of the Twenty-sixth Annual Meeting of the Association for Computational Linguistics. State University of New York at Buffalo, Buffalo, New York, USA, pp. 123-130.