Human–computer information retrieval

Human–computer information retrieval (HCIR) is the study and engineering of information retrieval techniques that bring human intelligence into the search process. It combines the fields of human-computer interaction (HCI) and information retrieval (IR) and creates systems that improve search by taking into account the human context, or through a multi-step search process that provides the opportunity for human feedback.

History

The term human–computer information retrieval was coined by Gary Marchionini in a series of lectures delivered between 2004 and 2006. Marchionini's main thesis is that "HCIR aims to empower people to explore large-scale information bases but demands that people also take responsibility for this control by expending cognitive and physical energy."

In 1996 and 1998, a pair of workshops at the University of Glasgow on information retrieval and human–computer interaction sought to address the overlap between these two fields. Marchionini notes the impact of the World Wide Web and the sudden increase in information literacy – changes that were only embryonic in the late 1990s.

A few workshops have focused on the intersection of IR and HCI. The Workshop on Exploratory Search, initiated by the University of Maryland Human-Computer Interaction Lab in 2005, alternates between the Association for Computing Machinery Special Interest Group on Information Retrieval (SIGIR) and Special Interest Group on Computer-Human Interaction (CHI) conferences. Also in 2005, the European Science Foundation held an Exploratory Workshop on Information Retrieval in Context. Then, the first Workshop on Human Computer Information Retrieval was held in 2007 at the Massachusetts Institute of Technology.

Description

HCIR includes various aspects of IR and HCI. These include exploratory search, in which users generally combine querying and browsing strategies to foster learning and investigation; information retrieval in context (i.e., taking into account aspects of the user or environment that are typically not reflected in a query); and interactive information retrieval, which Peter Ingwersen defines as "the interactive communication processes that occur during the retrieval of information by involving all the major participants in information retrieval (IR), i.e. the user, the intermediary, and the IR system."

A key concern of HCIR is that IR systems intended for human users be implemented and evaluated in a way that reflects the needs of those users.

Most modern IR systems employ a ranked retrieval model, in which the documents are scored based on the probability of the document's relevance to the query. In this model, the system only presents the top-ranked documents to the user. These systems are typically evaluated based on their mean average precision over a set of benchmark queries from organizations like the Text Retrieval Conference (TREC).
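Mean average precision averages the precision observed at each rank where a relevant document appears, then averages that value across the benchmark queries. A minimal sketch in Python, with hypothetical document ids and relevance judgments:

```python
def average_precision(ranked, relevant):
    """Average precision for one ranked result list.

    ranked: document ids in the order the system returned them.
    relevant: set of ids judged relevant for the query.
    """
    hits, precision_sum = 0, 0.0
    for i, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            precision_sum += hits / i  # precision at this relevant hit's cutoff
    return precision_sum / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """Mean of average precision over a set of benchmark queries."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

# Hypothetical benchmark: two queries with known relevance judgments.
runs = [
    (["d1", "d2", "d3", "d4"], {"d1", "d3"}),  # AP = (1/1 + 2/3) / 2
    (["d2", "d1"], {"d1"}),                    # AP = (1/2) / 1
]
print(round(mean_average_precision(runs), 4))  # prints 0.6667
```

Note that this single number hides the interaction effects HCIR cares about, which is why the evaluation models below depart from it.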
Because of its emphasis on using human intelligence in the information retrieval process, HCIR requires a different evaluation model – one that combines evaluation of the IR and HCI components of the system. A key area of research in HCIR involves evaluation of these systems. Early work on interactive information retrieval, such as Juergen Koenemann and Nicholas J. Belkin's 1996 study of different levels of interaction for automatic query reformulation, leverages the standard IR measures of precision and recall but applies them to the results of multiple iterations of user interaction, rather than to a single query response. Other HCIR research, such as Pia Borlund's IIR evaluation model, applies a methodology more reminiscent of HCI, focusing on the characteristics of users, the details of experimental design, etc.

Goals

HCIR researchers have put forth the following goals towards a system where the user has more control in determining relevant results.

Systems should
no longer only deliver the relevant documents, but must also provide semantic information along with those documents
increase user responsibility as well as control; that is, information systems require human intellectual effort
have flexible architectures so they may evolve and adapt to increasingly more demanding and knowledgeable user bases
aim to be part of information ecology of personal and shared memories and tools rather than discrete standalone services
support the entire information life cycle (from creation to preservation) rather than only the dissemination or use phase
support tuning by end users and especially by information professionals who add value to information resources
be engaging and fun to use

In short, information retrieval systems are expected to operate in the way that good libraries do. Systems should help users to bridge the gap between data or information (in the very narrow, granular sense of these terms) and knowledge (processed data or information that provides the context necessary to inform the next iteration of an information seeking process). That is, good libraries provide both the information a patron needs as well as a partner in the learning process – the information professional – to navigate that information, make sense of it, preserve it, and turn it into knowledge (which in turn creates new, more informed information needs).

Techniques

The techniques associated with HCIR emphasize representations of information that use human intelligence to lead the user to relevant results. These techniques also strive to allow users to explore and digest the dataset without penalty, i.e., without expending unnecessary costs of time, mouse clicks, or context shift.

Many search engines have features that incorporate HCIR techniques. Spelling suggestions and automatic query reformulation provide mechanisms for suggesting potential search paths that can lead the user to relevant results. These suggestions are presented to the user, putting control of selection and interpretation in the user's hands.

Faceted search enables users to navigate information hierarchically, going from a category to its sub-categories, but choosing the order in which the categories are presented. This contrasts with traditional taxonomies, in which the hierarchy of categories is fixed and unchanging. Faceted navigation, like taxonomic navigation, guides users by showing them available categories (or facets), but does not require them to browse through a hierarchy that may not precisely suit their needs or way of thinking.
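A faceted navigation step reduces to counting facet values over the current result set and filtering on the user's selection, with counts recomputed after each narrowing. A minimal Python sketch; the records and facet names are hypothetical:

```python
from collections import Counter

# Hypothetical result set: each record carries facet fields.
results = [
    {"title": "Java in a Nutshell", "topic": "programming", "format": "book"},
    {"title": "Lonely Planet Java", "topic": "travel", "format": "book"},
    {"title": "Intro to Java",      "topic": "programming", "format": "video"},
]

def facet_counts(records, facet):
    """Count how many records fall under each value of a facet."""
    return Counter(r[facet] for r in records)

def narrow(records, facet, value):
    """Restrict the result set to one facet value (one navigation step)."""
    return [r for r in records if r[facet] == value]

print(facet_counts(results, "topic"))  # programming: 2, travel: 1
narrowed = narrow(results, "topic", "programming")
print(facet_counts(narrowed, "format"))  # counts recomputed after narrowing
```

Because the counts are recomputed from the data rather than from a fixed hierarchy, any facet can be applied in any order, which is the contrast with rigid taxonomies described above.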
Lookahead provides a general approach to penalty-free exploration. For example, various web applications employ AJAX to automatically complete query terms and suggest popular searches. Another common example of lookahead is the way in which search engines annotate results with summary information about those results, including both static information (e.g., metadata about the objects) and "snippets" of document text that are most pertinent to the words in the search query.
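The query-completion side of lookahead can be sketched as a prefix lookup over a sorted list of popular queries. The query log and the `complete` helper below are illustrative, not any particular engine's API:

```python
import bisect

# Hypothetical log of popular queries, kept sorted for prefix lookup.
popular = sorted(["jakarta", "java island", "java tutorial", "javascript", "python"])

def complete(prefix, limit=3):
    """Suggest up to `limit` popular queries that start with `prefix`."""
    start = bisect.bisect_left(popular, prefix)  # first candidate >= prefix
    out = []
    for q in popular[start:]:
        if not q.startswith(prefix):
            break  # sorted order: no later entry can match either
        out.append(q)
        if len(out) == limit:
            break
    return out

print(complete("java"))  # ['java island', 'java tutorial', 'javascript']
```

A production system would rank candidates by popularity rather than alphabetically, but the penalty-free property is the same: suggestions appear before the user commits to a query.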
Relevance feedback allows users to guide an IR system by indicating whether particular results are more or less relevant.
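Rocchio's classic formulation (see the references) moves the query vector toward the centroid of results marked relevant and away from the centroid of those marked non-relevant. A minimal sketch over term-weight dictionaries; the example weights and the alpha/beta/gamma values are illustrative:

```python
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio update: q' = a*q + b*centroid(relevant) - g*centroid(nonrelevant)."""
    terms = set(query)
    for d in relevant + nonrelevant:
        terms |= set(d)

    def centroid(docs, term):
        return sum(d.get(term, 0.0) for d in docs) / len(docs) if docs else 0.0

    return {
        t: alpha * query.get(t, 0.0)
           + beta * centroid(relevant, t)
           - gamma * centroid(nonrelevant, t)
        for t in terms
    }

query = {"java": 1.0}
liked = [{"java": 0.8, "programming": 0.6}]    # user marked relevant
disliked = [{"java": 0.5, "coffee": 0.9}]      # user marked non-relevant
updated = rocchio(query, liked, disliked)
# "programming" gains weight; "coffee" goes negative (often clipped to 0).
```

The reweighted query is then re-run, so each round of feedback steers the next ranking – the multi-step loop that HCIR evaluation has to account for.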
Summarization and analytics help users digest the results that come back from the query. Summarization here is intended to encompass any means of aggregating or compressing the query results into a more human-consumable form. Faceted search, described above, is one such form of summarization. Another is clustering, which analyzes a set of documents by grouping similar or co-occurring documents or terms. Clustering allows the results to be partitioned into groups of related documents. For example, a search for "java" might return clusters for Java (programming language), Java (island), or Java (coffee).
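One simple way to realize such clustering is single-link grouping by token overlap (Jaccard similarity). The documents and threshold below are toy examples; real systems use stronger representations (e.g., TF-IDF vectors), but the partitioning idea is the same:

```python
def jaccard(a, b):
    """Jaccard similarity between two documents' token sets."""
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b)

def cluster(docs, threshold=0.3):
    """Single-link clustering: join a group if similar to any member."""
    clusters = []
    for doc in docs:
        for group in clusters:
            if any(jaccard(doc, member) >= threshold for member in group):
                group.append(doc)
                break
        else:
            clusters.append([doc])
    return clusters

docs = [
    "java programming language tutorial",
    "learn java programming",
    "java island travel guide",
    "travel guide to java indonesia",
]
for group in cluster(docs):
    print(group)  # two groups: programming vs. travel senses of "java"
```

Presenting these groups as labeled clusters lets the user disambiguate "java" with one click instead of reformulating the query.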
Visual representation of data is also considered a key aspect of HCIR. The representation of summarization or analytics may be displayed as tables, charts, or summaries of aggregated data. Other kinds of information visualization that allow users access to summary views of search results include tag clouds and treemapping.
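A tag cloud maps each term's frequency in the result set to a display size; log scaling keeps the most frequent terms from dwarfing the rest. A minimal sketch with hypothetical counts and point sizes:

```python
import math

# Hypothetical term frequencies aggregated from a result set.
counts = {"java": 120, "island": 30, "coffee": 8, "jvm": 2}

def tag_sizes(counts, min_pt=10, max_pt=32):
    """Map counts to font sizes, log-scaled between min_pt and max_pt."""
    lo, hi = math.log(min(counts.values())), math.log(max(counts.values()))
    span = (hi - lo) or 1.0  # avoid division by zero when all counts match
    return {
        tag: round(min_pt + (math.log(n) - lo) / span * (max_pt - min_pt))
        for tag, n in counts.items()
    }

print(tag_sizes(counts))  # "java" largest (32pt), "jvm" smallest (10pt)
```

The cloud is a summary view in the sense above: it compresses the whole result set into one glanceable display that the user can click to drill in.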
Related areas

Exploratory video search
Information foraging

References

Marchionini, G. (2006). Toward Human-Computer Information Retrieval. Bulletin of the American Society for Information Science, June/July 2006.
Ingwersen, P. (1992). Information Retrieval Interaction. London: Taylor Graham. Archived from the original on 2007-11-25. Retrieved 2007-11-28.
Mira working group (1996). Evaluation Frameworks for Interactive Multimedia Information Retrieval Applications. Archived from the original on 2008-02-01.
Grossman, D. and Frieder, O. (2004). Information Retrieval Algorithms and Heuristics.
Koenemann, J. and Belkin, N. J. (1996). A case for interaction: a study of interactive information retrieval behavior and effectiveness. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Common Ground (Vancouver, British Columbia, Canada, April 13–18, 1996). M. J. Tauber, Ed. CHI '96. ACM Press, New York, NY, 205–212.
Borlund, P. (2003). The IIR evaluation model: a framework for evaluation of interactive information retrieval systems. Information Research, 8(3), Paper 152.
White, R., Capra, R., Golovchinsky, G., Kules, B., Smith, C., and Tunkelang, D. (2013). Introduction to Special Issue on Human-computer Information Retrieval. Journal of Information Processing and Management 49(5), 1053–1057.
Hearst, M. (1999). User Interfaces and Visualization. Chapter 10 of Baeza-Yates, R. and Ribeiro-Neto, B., Modern Information Retrieval.
Rocchio, J. (1971). Relevance feedback in information retrieval. In: Salton, G. (ed.), The SMART Retrieval System.

External links

Workshops on Human Computer Information Retrieval
ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR)