Mind Maps

Abstract: This article concerns a twofold problem with large collections of information. On the one hand, the searcher needs recall (the ability to bring up everything on a topic). This is the chief goal of search engines, and it is one they have not come close to meeting even now.

On the other hand, the searcher needs precision (the ability to find the specific piece of information that contains the answer to a question). This is not even a goal of search engines, and compared to databases created by and for librarians, search engines have failed miserably at it. Mind maps were an attempt to use the technology of the time to solve that problem.

An even bigger issue remains unanswered by my proposal for mind maps: guaranteeing the authoritativeness of any result you get back from an online search. That problem is particularly difficult to solve because of the incentive to game search-engine ratings.

NKH July 2, 2011

Volume 7, No. 6 • June 1999 • FEATURE • The Searcher
“Mind Maps: Hot New Tools Proposed for Cyberspace Librarians”

by Nancy K. Humphreys

Experienced searchers are the explorers and sense-makers of cyberspace. The new virtual library built around Net-based online sources is not just changing our methods of working, it is also opening up new sources of income. Freed by the Internet from geographically bounded libraries, searchers can now supply service and receive payment from communities of people sharing common interests and passions, though scattered all over the world. Such communities on the Internet are legion and can be found in the form of listserv groups, newsgroups, chat rooms, company intranets, and extranets. All such groups have this in common: Each has a shared set of concerns and needs information.

Online payment is in its infancy, but we can expect an abundance of options in the near future that support new funding structures. For example, smart cards that insert into the computer and keep track of financial transactions are under development. These cards will make it possible to record transactions that cost just pennies as well as those costing large sums. Soon we should be able to plug in a smart card worth a set amount and let it take care of paying for the client’s articles and Web downloads, along with our professional fees. ISPs and telecommunications services have begun adding new services and payment options as quickly as possible.

Taking a peek into the future, here is my wildest fantasy. I dream of becoming the online Springsteen searcher. I have my bookshelf full of books on my favorite rock singer and drawers of articles I have collected over the years, all indexed in my extensive personal database. In my fantasy, 2,000 listserv members chip in a tiny amount via their ISPs each month, and the ISP of the Springsteen listserv pays me for on-demand, on-call answers to members’ questions. I can answer either by using my personal collection or my Internet searching expertise. Chat room hosts pay me for answering questions within specific scheduled hours. My information expertise and my bank account have converged in a wonderful way. I am the Springsteen librarian on the Internet.

What We Need

This kind of fantasy could soon become a reality. With a new kind of online tool that has become feasible, professional searchers can better connect the needs of a client with a precise Web site or part of a Web site containing the answer. This new online tool is a modification of a traditional library tool that has existed for centuries: the book index.

For many years library organizations such as OCLC have researched how to use back-of-the-book indexes to augment library subject catalog records. Software has been designed that uses tables of contents and indexes from books to search databases such as library catalogs or lists of books sold by sites like Amazon.com. I propose that we think bigger. I suggest that we use selected back-of-the-book indexes and existing software to help search the biggest database of all — the entire World Wide Web.

Let’s see how online searchers could use a software tool based on back-of-the-book indexes to assist us in dealing with search engine databases compiled by spiders that crawl across the entire Internet or that crawl through deep Web sites. By deep Web sites I mean sites so large that the visitor must use a search engine to find information in them. The end result of using this new software tool would be faster and more accurate end-user searching.

For online searchers, this approach offers the financial benefit of being able to shift from serving individual clients to serving groups of clients who repeatedly need information on topics of great interest. For librarians, this means potential jobs as consultants who organize the subject matter of Web sites for easier, faster access. For back-of-the-book indexers it means a secondary product which can be sold to search engine producers or large Web site developers on a flat-rate or royalty basis. The customer would gain inexpensive, more efficient results, and Web site developers would gain improved usage and revenue from more Internet advertisers.

Why Spiders Need Human Intervention

Subject indexing is a human skill that computer people often mistake for something a machine can do. When Internet search engines began spidering, their developers thought that they could automate online searching completely and eliminate the need for any human intervention. However, search engine producers soon realized that their spiders needed expensive human help to add the selecting and sense-making that users needed. So the search engines began adding directories of Web sites indexed by human beings and began ranking sites by popularity. Some search engine producers even began taking money to list sites. All these actions were responses to the burdensome lack of precision in the databases created by the search engines’ spiders.

“More like this” and “try these terms” cross-referencing systems help searchers narrow down their results. Intelligent agents and cookies try to predict what sites will interest users based on patterns of previously visited sites. Users who fill out subject interest forms can have automated profile-searching done, sometimes in return for providing demographic information to marketers via biographical information forms. Each of these innovations has helped, but Internet searching still remains largely a cumbersome process fraught with “false drops” (e.g., a search on apple retrieving information on the edible fruit instead of the computer company, or vice versa) and way too many hits.

We have all run into problems with full-text searching of documents not indexed by humans: words that look alike but mean different things; meaningless parenthetical phrases; complex concepts that take more than a few words to pin down. Sometimes computer searching solves these problems easily. Search engine producers use algorithms, term-weighting, the context of a topic, and the hints provided by punctuation such as commas or parentheses, and by capital letters, as clues to what the searcher wants. Nonetheless, the search engines still lack the precision searchers hunger for.

I would argue that what really makes indexing and search retrieval difficult to automate are two things that human indexers do and machines do not. One is to consider the audience for a document, whether book or Web page. The other is to keep a mind map or “syndetic structure” in mind as a document is indexed.

Human Versus Machine Knowledge

Knowing the audience is the most crucial element in indexing. With a book meant to be read by college faculty, the indexer will use academic jargon, while with a book meant for lay people the indexer will use more natural language. Thus, the document’s intended audience determines the indexer’s choice of language. The audience also determines what is put in and what is left out of an index. For a trade book, the audience may want to find every mention of a particular subject indexed, e.g., a celebrity’s secret lovers. On the other hand, for a textbook, the indexer may only identify a subject when it is first defined by the book or when it is the topic of a major discussion.

The second limitation of automatic indexing that a new search tool could correct is the search engines’ lack of syndetic structure. This fancy term refers to the structure that human indexers use to provide an index with cross-references. Many search engines provide see-also cross-references at the top of the page when results from a search are returned. Often these see-also suggestions indicate the extreme “confusion” of the search engine when asked to look for a term that has many meanings or many uses in different areas (or domains) of knowledge. The human mind has the capacity for much more complex cross-referencing of terms than any search engine in existence.

The British Society of Indexers’ indexing course materials offer an example of this kind of cross-reference structure in the human mind. They point to the mental structure we carry of a classical music orchestra. We know that a European orchestra has sections — strings, horns, percussion. The sections subdivide into instruments — violins, clarinets, drums. These instruments may further subdivide — violin into first and second violin. An example I liked to use when I gave library orientations was a biological hierarchy: animals; mammals; apes; gorillas. Indexers hold these hierarchical structures of terms in mind and make references between different levels of the structures, e.g., violins, See also first violins, second violins, or between terms on the same level, e.g., gorillas, See also orangutans.

There are actually two types of syndetic structures in the human mind. One is the shared structure of knowledge about subject domains usually taught to us in schools. The other is the very personal syndetic structures that each individual carries in his or her mind. Librarians and searchers are experts in understanding the latter kind of syndetic structure. After years of being asked questions by patrons or clients, we often know where to look for an answer as soon as the client begins asking a question.

A knowledge of what the user (e.g., the searcher’s client) needs and a syndetic structure of terms are the key factors that Internet search engines lack. Unfortunately, expertise in understanding a client’s mental structures is not deemed as important as the shared structures of knowledge about particular subject domains that academics and other experts specialize in. It is not just subject expertise, but the kind of expertise that librarians, searchers, and indexers have about user-needs that Internet search engines so badly lack. This is why we need a new tool.

Searchers who use search engines should be able to make use of back-of-the-book indexes while searching online. These indexes, containing terms reflecting both the formal syndetic structure of the subject matter covered by the book and the indexer’s subjective additions of terms they think other readers will use to find information, could be turned into software “mind maps.”

To create mind maps, we could transform book indexes into generic lists of terms about a particular topic. The mind map would also include cross-references that indicate the relationships between the terms in the list. In this way selected book indexes would become online mini-thesauri on very specific topics. Internet search engines could then connect these mini-thesauri or mind maps to their databases by use of software called middleware, software which connects front-end user interfaces, such as Web browsers, with back-end databases.
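To make the idea concrete, here is a minimal sketch of what one entry in such a mind map might look like in XML, using the orchestra example from the previous section. All of the element names here (mindmap, term, broader, narrower, seealso) are invented for illustration; no standard format for mind maps exists:

<mindmap topic="classical music orchestras" audience="laypersons">
  <term name="violins">
    <broader>strings</broader>
    <narrower>first violins</narrower>
    <narrower>second violins</narrower>
    <seealso>violas</seealso>
  </term>
</mindmap>

Middleware reading a file like this could offer the broader, narrower, and see-also terms as suggestions whenever a searcher typed “violins.”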

Lastly, to link Web pages, or parts of Web pages, with mind maps, we would use a new coding technology, called XML or eXtensible Markup Language. Developers of search engines and Web pages have increasingly begun adopting XML. The new 5.0 versions of Netscape’s Navigator and Microsoft’s Internet Explorer browsers have introduced XML support.

If my proposed vision came true, searchers would select a particular mind map or combination of mind maps as guides before setting out on a search of cyberspace. The topics covered by mind maps would narrow the scope of the search to a very small domain of knowledge, e.g., travel to the Mediterranean, Indian cooking, day-trading, investment in foreign currencies, or teaching English as a second language. The experienced searcher would use an understanding of the client’s needs to choose and use the right mind map(s) for Internet exploration. Eventually these new technologies — mind maps, middleware, and XML — might even join with voice recognition technology and let searchers speak, rather than type, their requests.

The Mind Map

Before getting into what middleware and XML do, let’s start with the idea of a mind map or mini-thesaurus. A thesaurus shows the cross-reference structure for a particular area of interest or domain of knowledge. The ordinary Roget’s or other printed thesaurus lists a word or a phrase along with lots of other suggested terms. These terms can include synonyms, broader terms, narrower terms, or terms describing related things on the same level of the hierarchy, e.g., orangutans and monkeys. A thesaurus can also show terms not used.

While most cross-reference structures reside only in the human mind, librarians and other online searchers sometimes use structured thesauri available in print or electronic form. However, these thesauri usually cover whole fields of knowledge, such as women’s studies, architecture, psychology, or medicine. The indexes in the back of books, on the other hand, cover only the topics discussed in the book. Usually these indexes are very specific and precisely targeted to serve specific audiences. Mind maps built around them could target specific groups of users and specific interests. One could easily code the mind maps to indicate specific audiences, e.g., experts, laypersons, children, etc.

The American Society of Indexers’ newsletter periodically reviews and hosts advertisements for several indexing software products. Experienced freelance indexers tend to use Macrex or Cindex (pronounced see-index), the oldest of these products. Macrex and Cindex automate much of the work of indexing. For example, they check the cross-references that indexers put in to make sure each see-also reference points to an existing term in the index. Both of these programs create output in various file formats so that indexes can be imported into word-processing software. Newer editions of indexing software can prepare indexes for use on the Web.

Creating a generic mind map from an existing book index would require the indexer to delete references from a finished index that apply only to the book containing the index. The indexer might also add extra terms and cross-references to make the mind map a more complete guide to the topic. At present, many publishers leave copyright ownership to the indexer. With the existence of mind maps, we might expect a different arrangement. Publishers would want more control over the indexes to their books. This would be especially true if mind maps were to use the titles of well-known books, e.g., the Mac Bible, Anybody’s Bike Book, or The Joy of Cooking, and were updated as new editions of the books were published.

Who, you might ask, would bear the expense of hiring indexers to convert back-of-the-book indexes to mind maps and of programming middleware to make mind maps usable with Web browsers? The most obvious answer is third-party software producers. If mind maps were made available for sale to experienced online searchers and to amateur searchers very interested in a particular topic, software producers would have an incentive to work with publishers or indexers to obtain indexes, modify them, and then sell them to searchers. However, this would still not constitute a very large market, which could mean high-priced products.

Other groups have an even greater economic interest in both professional and novice searchers — the advertisers who support Internet search engine portals and deep Web sites. Advertisers on the Internet face a difficulty in targeting their particular products and services to small groups of highly interested consumers. Advertisers who advertise at the universal portal sites such as Yahoo!, AltaVista, or HotBot reach huge numbers of people, but most of those people will never want their products. These advertisers pay a high price to reach people who are not potential customers.

Mind maps would change this situation. With the adoption of mind maps, the searcher would encounter multi-tiered advertising on deep Web sites, Internet search engines, and ISPs, such as America Online, that also serve as Internet portals. When entering a deep Web site or an Internet portal, the user would see general advertisers, such as Amazon.com or Hallmark greeting cards, wishing to reach large numbers of people.

Once a mind map was selected, however, the advertising on the portal or on the deep Web site would narrow to products targeted to just the specific group(s) of people interested in the topic of the mind map. This focused advertising might even offer products and services our clients might want to know about. Because the use of mind maps would benefit advertisers and increase their number, I think that search engine providers wanting to serve as portals to information in cyberspace, as well as deep Web sites, would find it in their interest to make mind maps available free to those who search their sites.

Middleware

Publishers’ sales records for their books would undoubtedly serve as a good guide to the best possible indexes to use for mind maps. Let’s assume that many mind maps have been developed for use while searching. Now let’s take a hypothetical searcher who has a client needing information on how to start a conference planning business. The searcher does a full-text search and instantly finds that the words conference, planning, and business return a huge amount of irrelevant material on any search engine chosen. Now let’s assume that the searcher can use a single mind map or a combination of mind maps. A list of these mind maps appears in the searcher’s browser window.

A mind map on “starting and running your own small business” might be a good choice for this query. Along with this, a mind map based on a book about “what conference planners do” might help. If no such mind map about “what conference planners do” exists, the user might select a mind map based on a library reference book such as the Dictionary of Occupational Titles. The searcher might also click on a mind map on “hotel management.” The creative searcher can think of many more possibilities.

With many mind maps eventually available on many different topics, one would soon need a reference tool or organizer to help searchers find the right mind maps. Fortunately, librarians have long had experience with creating systems of access to information. Librarians could assist producers of deep Web sites or Internet portals needing to offer such a system of access to mind maps for their users. Librarians could make it easy for the searcher to browse through and choose mind maps. This would be particularly cost-effective if Internet portals or other Web site developers collaborated to build a separate “library” housing mind maps that all could share.

For example, I picture a hyperlink that takes the searcher into a room that looks like a library, with books on the shelves. The searcher can peruse the shelves and open the “books” representing mind maps to see what is inside them. Inside a “book” the searcher would find a scope note and a list of the terms in that mind map. The searcher chooses mind maps and “checks them out” of the library to use when searching. Or the library could contain icons of scrolled maps that users could unroll.

For mind maps to work with particular search engines, the producers of individual search engines or Web sites that use search engines will need to create middleware to connect generic mind maps with their particular flavor of search engine.

In the case of search engines, one would want the artificial intelligence that is added to the middleware and attached to the mind maps by computer programmers to be proactive, in the manner of AskJeeves [http://www.askjeeves.com]. The search engine won’t provide a list of suggested terms at the top of the screen. Instead, when a person looks up violins it will respond, “Do you want information about first violins and second violins too?” Or it will ask the searcher, “Do you want to know about orangutans, chimpanzees, and/or gibbons as well as gorillas?”

Mind maps will aid the searcher in picking out additional or more precise terms for a search. Mind maps could solve most homonym confusion that currently afflicts search engines. There will be no need for a computer to ask whether the searcher wanted a mouse with legs or buttons — the mind map chosen by the searcher when beginning a search will tell the computer which type of mouse is meant.

With searching made so much quicker and easier, a searcher could concentrate on becoming an expert in the use of particular mind maps on topics of the most interest to that searcher. Let’s look next at the role of those needed to produce mind maps that are representative of the terrain found on real Web sites. This too could be a job for searchers, librarians, and/or indexers.

How to Do It: XML (eXtensible Markup Language)

XML is a new technology standard that enables programmers to take documents in an older (legacy) system, strip the formatting information, and convert the documents into the form that browsers expect to see. In addition, XML offers a method of coding parts of a document, such as the heading of a letter, a table, a signature, or a form. This speeds up searching because only the part of the document requested by the searcher needs to be sent over the Internet. Currently, the entire document has to be sent. Completing a searcher’s request by sending a whole page or document takes an Internet server longer and leaves the searcher with more unnecessary text to wade through before reaching the sought-after information.
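As a rough sketch of the idea, a business letter coded with XML might mark off its parts like this (the tag names here are invented for the example):

<letter>
  <heading>Acme Conference Services, 12 Main Street</heading>
  <body>Thank you for your inquiry about our planning packages.</body>
  <signature>J. Smith, Director</signature>
</letter>

A server could then answer a request by transmitting only the body element instead of the whole document.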

Those of us interested in subject access to information should know another important thing about XML: One can use XML to index what databases label as “fields,” like “author,” “book review,” or “book title.” There are 15 basic metatag fields, called the Dublin Core, with more to come, depending on agreements among users of metatags. Web site developers can use XML to recognize specific fields in documents and retrieve specific bits of data for searchers. For example, an Internet searcher could ask for tables of particular types of mutual fund yields over the past 5 years, or request book reviews of a particular book written by a certain author.
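For instance, a handful of Dublin Core fields attached to a book review might look roughly like this (dc is the customary prefix for Dublin Core elements; the values here are invented):

<dc:title>Review of Born to Run</dc:title>
<dc:creator>Jane Doe</dc:creator>
<dc:subject>Springsteen, Bruce</dc:subject>
<dc:date>1999-06-01</dc:date>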

XML tags define fields, but it would take far too much work to tag each of the subjects that a Web site covers. But without tagging the subjects covered by a Web site, we must continue to depend on full-text searching by the computer and all the attendant miseries this brings when searching large databases. The mind map would solve this problem. People developing Web sites could tag their sites with the names of a few germane mind maps. From then on, the mind maps would automatically expand into a larger controlled vocabulary that one could use for searching Web sites.
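A Web site developer might then declare the relevant maps with a line or two of tagging along these lines (the mindmaps and mindmap elements are hypothetical):

<mindmaps>
  <mindmap name="starting and running your own small business"/>
  <mindmap name="conference planning"/>
</mindmaps>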

XML has another exciting feature: It defines elements that contain other elements. In other words, XML has a built-in hierarchical syndetic structure. The form definitions used by XML are called Document Type Definitions, or DTDs. With DTDs, XML enables the developer to establish automatic cross-references. For example, an XML Document Type Definition might specify Java="Programming Language". This bit of code specifies the type of document along with the broader heading for that type. Such tagging could also link mind maps associated with a document in a hierarchical, thesaurus-like fashion.
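In actual DTD syntax, a simplified, hypothetical declaration of this kind might read:

<!ELEMENT article (title, body)>
<!ATTLIST article
  mindmap CDATA #REQUIRED
  broader-mindmap CDATA #IMPLIED>

A document could then carry a tag such as <article mindmap="Java" broader-mindmap="programming languages">, echoing the Java example above.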

In other words, a document on a Web site could have a DTD that indicates the most specific mind maps that the Web site developer thinks correlate to his/her document. For each of these specific mind maps, the DTD would indicate a broader mind map. Should the searcher not find the information they seek using a specific mind map, the portal or deep Web site the searcher uses could offer an option of automatically going to the next broader level of mind maps, as defined by the DTDs of the originally chosen maps, and repeating the search or a modified version of the search.

XML can do this because XML allows a Web browser to recognize tags with content data about an object such as a Web page. XML tags refer to objects that developers can make do things. For example, XML script can associate a term with a programming command. In this way XML tags can support actual programmed commands. If they treated mind maps as objects, object-oriented software programs built with XML could automatically retrieve and manipulate data.

XML and Metatags

HTML is the dominant format used for placing the contents of a Web page on the browser’s screen. HTML does not describe the contents of a Web page. Currently, metatags are the only way to add controlled-language subject indexing to Web pages during creation. Metatags can also contain data not found in the document or Web site. Metatags could include the price of a listed item, historical information about a picture on the page, or data about the producer of the page. Metatags do not show up in the browser window, but search engines read them.

Unfortunately, many search engine spiders ignore metatags because of the misuse of metatags by those wishing to get their sites listed at the top of search engine results. Misuse includes repeating words over and over or using words that really have nothing to do with the Web site but are of interest to people, e.g., “sex.” XML is actually a hierarchical framework for metatagging metatags. This means that XML, too, could suffer from misused and improperly assigned tags.

With mind maps, Web site developers will have an alternative to using deceptive practices in order to rise to number one on Internet portals’ hit lists. These same developers can reach their audience and/or please their advertisers by tagging sites with relevant mind maps. The impetus to cheat at metatagging will be far less when a mind map lets the Web site developer precisely target and reach the audience most interested in the site and its advertisers.

Those who continue to cheat will find that they have less opportunity to do so. More and more searchers will use mind maps. The number of people using the initial search engine box will dwindle. In addition, the mind maps will be controlled by the indexers and publishers who create them and by the search engines that use them. It should be fairly easy for the search engine spiders that crawl Web sites to automatically ascertain whether the terms on a Web site match those of the mind maps the site claims represent its content. Cheaters can be blocked from the search engine.

Legitimate large-scale advertisers will have the new option of paying an Internet portal to advertise their product to just those portal users who choose a particular mind map when searching. Apple Computer, for example, might want to advertise to searchers who use the “Macintosh” mind map on a portal like Yahoo!. This will give the major search engines a financial incentive to offer tiered advertising along with free mind maps to searchers. Portals and deep Web sites can put the most general advertising on the first page of the site where many people will see it, while offering targeted advertising rates for specific audiences identified as users of particular mind maps.

XML-enabled browsers could use properly developed and tagged mind maps for searching. The logical place to start this process would seem to be with deep Web sites. Unlike Internet portals, deep Web sites have complete control over search engine results because the developers control the coding of all information on their sites. Deep Web sites will find that as mind maps are used to facilitate searching, specific audiences can be more narrowly targeted. This will attract more advertisers to their sites by making the advertisers’ dollars more effective.

Once XML and mind maps are in widespread use, metatags could finally serve the purpose for which they were originally conceived: to add explanatory data to a Web site. When search engines like Infoseek or MetaCrawler set their spiders to check metatags, they might then find the metatags usefully adding terms of relevance to the site that do not appear in the site’s text or in its associated mind maps. Complex discussions, for example, often do not actually name what is being talked about, and some way of indicating this is needed. When Bruce Springsteen discusses Elvis Presley in an interview, he may be indirectly referring to a promise he made to fans many years ago, namely, that Bruce would try to avoid Elvis’ fate by keeping his connection with the audience alive. The words “promise to fans” may not appear in the text, but a Springsteen site might add them as a metatag.
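On the Springsteen site, that addition might be as simple as a single metatag (the keyword content is invented to match the example):

<META Name="KEYWORDS" Content="promise to fans, Elvis Presley">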

BOSSY METATAGS

This is an example of egregious meta-tagging on a site that appeared six times on the top-12 hit list for Springsteen on Northern Light’s power search.

<HTML><HEAD>
<META Name="KEYWORDS" Content="SPRINGSTEEN BRUCE E STREET BAND CLARENCE LITTLE STEVEN BOSS SPRINGSTEEN BRUCE E STREET BAND CLARENCE LITTLE STEVEN SPRINGSTEEN BRUCE E STREET BAND CLARENCE SPRINGSTEEN BRUCE E STREET BAND SPRINGSTEEN BRUCE SPRINGSTEEN SPRINGSTEEN BRUCE BRUCE E STREET BAND E STREET BAND E STREET BAND CLARENCE CLARENCE CLARENCE CLARENCE LITTLE STEVEN LITTLE STEVEN LITTLE STEVEN LITTLE STEVEN LITTLE STEVEN BOSS BOSS BOSS BOSS BOSS BOSS">

Summary

So, let’s put it all together. Let’s say Ziff-Davis, publisher of a deep Web site on computer-related information, wants to make its site more accessible to searchers. Ziff-Davis uses a search engine on its site and already employs Direct Hit technology to place the most popular parts of the site at the top of the hits for every search. The searcher interested in flat screen monitors then gets the list of what previous searchers for that topic have chosen to look at most often or for the longest time. But let’s say that still doesn’t do it for the present searcher.

After Ziff-Davis adds mind maps on its site to index all its articles, the searcher interested in flat screen monitors can also use mind maps to get at the particular piece of information desired. Ziff-Davis or any other deep Web site would simply need to tag an article or a part of an article as belonging to particular mind maps. In other words, the person posting an article to the Ziff-Davis site would use an XML tag to code the article or a specific part of the article as being relevant to particular mind maps that Ziff-Davis makes available to those who search its site.

Let’s say there is an article on the site about the history of the Macintosh mouse. This article could carry XML tags that identify it as germane to the “history of personal computers” mind map, the “Macintosh” mind map, the “hardware peripherals” mind map, etc. In addition, remember that XML DTDs will enable the programmer to make hierarchical links between mind maps. If the searcher looking for information on the history of the Macintosh mouse does not find what they need in the specific articles retrieved, the searcher could ask the system to repeat a modified version of the search using DTD-linked broader mind maps such as “history of computers,” “Apple Computer Inc. computers,” and/or “computer hardware.”
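Putting these pieces together, the posted article might carry tags roughly like these (the article element and mindmaps attribute are hypothetical):

<article mindmaps="Macintosh; hardware peripherals; history of personal computers">
  <title>A History of the Macintosh Mouse</title>
  <body>The one-button mouse first shipped with the Macintosh in 1984.</body>
</article>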

The middleware developed for connecting the mind maps used by Ziff-Davis to the searcher’s browser would then kick in. This middleware would automatically filter all the words in the tagged article, or tagged part of an article, on the Ziff-Davis site through the list of terms in the mind map. It would mark any terms in the article, or part of the article, that match the terms used in the designated mind map(s). The search results would list every article, or part of an article, containing a match for a term the searcher used.
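The middleware’s output for a sentence of that hypothetical article might then look something like this sketch, with each word that matches the “Macintosh” mind map wrapped in an invented match tag:

<body>The one-button <match map="Macintosh">mouse</match> first shipped with the
<match map="Macintosh">Macintosh</match> in 1984.</body>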

The searcher on the Ziff-Davis site using the “Macintosh” mind map to find information about the term mouse, for example, will automatically retrieve articles on the one-button Mac mouse rather than the three-button PC mouse. Mind maps, by allowing use of a controlled vocabulary for searching, will enable deeper and quicker searching. For example, the searcher using the “Macintosh [computer]” mind map can type a term such as mouse and be assured that the articles retrieved will relate somehow to Macintosh computers.

For small Web sites, the process of accessing sites by use of mind maps would necessarily involve the major search engines and Internet portals. When small Web site owners contact an Internet search engine to list their site, the search engine would give them access to the mind maps used by that search engine. Web site developers would then pick the most specific mind maps to represent the content on their sites.

For example, if my site had information about peer-to-peer computer networking for small businesses, I might ask the major search engines to list my site with a “how to start and run a small business” mind map and with a “peer-to-peer networking” mind map. I’d make sure to use terms from these mind maps in the content of my site. Then, when a searcher initiates a query using one or both of those maps and the search engines automatically match the words used on my site against the terms in the maps, my site will have an excellent chance of getting the searcher’s attention.

Mind maps would benefit Web site developers by enabling searchers most interested in a specific site’s topics to find them. Mind maps would benefit Internet portals and deep Web sites by attracting more advertisers. Advertisers would benefit because the ones who need to reach very targeted markets and who could not afford mass audience rates could now reach smaller numbers of people most likely to want their products. The end user or client would also benefit from the special expertise about the way people look for information that librarians, indexers, and searchers bring to mind maps on their behalf.

One might object that efficient use of mind maps requires the searcher to have some prior knowledge and a context for what they seek. In the example above, the searcher must know that Macs and PCs do not use the same kind of mouse. I would argue that most searchers do have some context when they begin searching. Professional searchers certainly do not begin a search based on a word for which they have absolutely no context. Even for a foreign word or academic jargon, the searcher interviews the client to get some idea about the term’s point of reference.

Professional online searchers would have the advantage in using mind maps, because we know enough about the Internet and our clients’ needs to select the best mind map(s) for searching the Internet or deep Web sites. The professional searcher’s query will also speed up as the use of a mind map diminishes the number of Web sites and/or pages that need investigation. In the above example, using the “Macintosh” mind map would considerably reduce the area of the Ziff-Davis Web site needing searching.

Precision becomes especially important when it comes to retrieving articles from proprietary sites. The user who thinks to pick up mind maps on “travel” and “doing business abroad” before searching for articles about packing a suitcase for a Malaysian business trip on a newspaper site that charges per article will be rewarded. That searcher will make a customer a lot happier than the searcher who must search the entire newspaper site without such a map as a guide. Packing is a word with many meanings, while suitcase is a word that appears in many contexts besides travel articles, e.g., luggage ads, computer typography, money laundering, etc.

With the aid of mind maps, the efficient searcher will gain more clients and keep established clients coming back. With mind maps, searchers as well as clients may feel more pleased with the results obtained from searching. Mind maps will also make specialization in particular subject areas or on particular Web sites more possible. As a result of mind maps, the dream I posed at the beginning of this article could lie within reach of many searchers. We can choose to remain generalists if we wish, or we can become Internet librarians with expertise in special areas of the most interest to us.
