Using language as a foundation, the Laboratory for Language-Based Intelligent Systems is developing an artificial system that emulates human intelligence. Language is integral to human intelligence, so a language-based approach should be incorporated into any attempt to model the human brain. Such an approach is necessarily top-down, because we still do not know how the higher-order functions of language are supported at the neuronal level. Moreover, a language-based approach allows us to observe the internal characteristics of the brain's language system from the outside.
Any language-based approach rests on a theory of language. Employing systemic functional linguistics (SFL), we developed a computational model of the Japanese language system and an algorithm for understanding and generating text. With this strategy we can build a brain-like information-processing environment that should reveal details of the specific neural architecture that processes language.
With a language-based approach to intelligence, it is important to understand what the brain requires to communicate using language and which expression methods already exist in the brain's language systems. Aphasia research (Yamadori, 1997) has shown that in a right-handed person the right cerebral hemisphere is engaged in understanding context, the left central region (including Broca's and Wernicke's areas) is engaged in grammar, and the area concentrically surrounding the central region is engaged in understanding meaning. Evidence also suggests that non-human animals may use the right cerebral hemisphere to monitor context. Therefore, an understanding of language in the brain grounded in context is a viable theoretical basis for developing a language-based computational system.
Systemic Functional Linguistics, or SFL, is a linguistic theory that considers grammar to be the mechanism by which meaning is expressed in context via phonology. Grammar and lexicon alone are not sufficient for a person to communicate with language. In fact, setting aside the development of the larynx in the evolution of human language, the social context for communication probably existed well before the meanings being expressed appeared. As the intended meanings became more complex, grammar developed to enable their appropriate expression with words.
SFL was established as a linguistic theory by M.A.K. Halliday. He argues that the social context for communication regulates the way the semantics of language are employed; in his view, social settings shape the development of language. This view opposes Noam Chomsky's theory of a universal grammar, which states that the machinery for language evolved in the brain and that the human brain is hard-wired for language from birth. SFL divides the language system into four strata: context, semantics, lexico-grammar, and phonology/graphology. As shown in Fig. 1, each stratum can be further divided into functional components, or functions. Ideational, interpersonal and textual functions of language are considered in the semantic stratum; here, ideational meaning refers to the way one uses representational tools to compose an idea. In the context stratum, for example, the functional components are field (what is going on in the communication), tenor (the social roles and relationships involved) and mode (the medium of communication).
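For reference, the four strata and the functional components named above can be summarized as plain data. This is only a restatement of the description in the text, not part of any implementation:

```python
# The four SFL strata, ordered from the most abstract (context) to the
# most concrete (phonology/graphology), and the functional components
# the text names for two of the strata.
STRATA = ["context", "semantics", "lexico-grammar", "phonology/graphology"]

FUNCTIONS = {
    "semantics": ["ideational", "interpersonal", "textual"],
    "context": ["field", "tenor", "mode"],
}
```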
In addition to a theory of language, a computational model is required to develop any language-based system for a computer. At the foundation of our model is a database called the Semiotic Base. The Semiotic Base uses SFL's strata concept and contains four separate databases: an expression base (phonology/graphology), a wording base (lexico-grammar), a meaning base (semantics) and a context base. Fig. 2 shows part of the meaning base. The semantic system in SFL is represented as a system network: a tree structure whose nodes, called meaning features, become more detailed as the tree extends toward the right.
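The tree structure of a system network can be sketched as follows. The node and feature names here are invented for illustration; the actual meaning base is of course far richer:

```python
class MeaningFeature:
    """A node in a system network; children are more detailed features."""

    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def paths(self, prefix=()):
        """Enumerate feature paths from general (left) to detailed (right)."""
        here = prefix + (self.name,)
        if not self.children:
            yield here  # a fully detailed selection ends at a leaf
        for child in self.children:
            yield from child.paths(here)


# A tiny hypothetical fragment of a semantic network.
network = MeaningFeature("clause", [
    MeaningFeature("statement", [
        MeaningFeature("assertion"),
        MeaningFeature("denial"),
    ]),
    MeaningFeature("question"),
])

for path in network.paths():
    print(" > ".join(path))
```

Each path from the root to a leaf corresponds to one increasingly detailed selection of meaning features.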
Any detailed system network is a meta-language for expressing meaning. The notion of a meta-language is important because language cannot be used to represent its own meaning; a meta-language is therefore a prerequisite for any model of the brain's language system. In the system network of meaning, for example, each square contains a realization statement that indicates which lexico-grammatical features should be used to convey the intended meaning feature. The lexico-grammatical network is expressed similarly, with lexico-grammatical features, so meaning and lexico-grammar remain closely related. The fourth database, the context base, stores situations, knowledge and text examples from the Corpus database, as well as descriptions of field, tenor and mode. There is also a dictionary of lexical items.
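The role of realization statements can be illustrated as a mapping from meaning features to lexico-grammatical features. All feature names below are hypothetical placeholders, not entries from the real Semiotic Base:

```python
# Hypothetical realization statements: each meaning feature names the
# lexico-grammatical features that should be used to express it.
REALIZATIONS = {
    "assertion": ["declarative-mood", "positive-polarity"],
    "denial": ["declarative-mood", "negative-polarity"],
    "question": ["interrogative-mood"],
}


def realize(selected_meanings):
    """Collect the lexico-grammatical features for the selected meanings."""
    grammar_features = []
    for feature in selected_meanings:
        grammar_features.extend(REALIZATIONS.get(feature, []))
    return grammar_features


print(realize(["denial"]))  # ['declarative-mood', 'negative-polarity']
```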
Using the Semiotic Base, our laboratory developed a computational model for textual understanding and generation. The generation process involves selectively specifying meaning features in the semantic system to express speech content in a given context, then converting them to text using the lexico-grammatical system and lexical items. If generation is a forward process from meaning to text, understanding is its inverse, from text to meaning. In our model, the understanding process begins with morphological and dependency analyses, advances to lexico-grammatical analysis and then semantic analysis, and concludes with conceptual analysis.
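The sequence of analyses in the understanding process can be sketched as a pipeline of stages. The stage functions here are stubs that only record which analysis has run; the real analyses are naturally far more involved:

```python
# Schematic understanding pipeline: each stage takes the accumulated
# analysis state and adds its own (placeholder) results.

def morphological_analysis(text):
    return {"text": text, "morphemes": text.split()}

def dependency_analysis(state):
    state["dependencies"] = []  # placeholder: head/modifier links
    return state

def lexicogrammatical_analysis(state):
    state["grammar_features"] = []  # placeholder: mood, polarity, ...
    return state

def semantic_analysis(state):
    state["meaning_features"] = []  # placeholder: network selections
    return state

def conceptual_analysis(state):
    state["concepts"] = []  # placeholder: links into the context base
    return state

PIPELINE = [
    morphological_analysis,
    dependency_analysis,
    lexicogrammatical_analysis,
    semantic_analysis,
    conceptual_analysis,
]

def understand(text):
    """Run the text through every analysis stage in order."""
    state = text
    for stage in PIPELINE:
        state = stage(state)
    return state
```

Generation would traverse the same strata in the opposite direction, from selected meaning features down to wording.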
To verify the functionality of our computational model, we will use a paradigm that implements a language-based computing environment in which everyday language is used to execute commands and perform functions. Using our computational model, the computer would manage and execute all information-processing tasks with language, much as the human brain does (Fig. 3). Over time, this system would yield prototypes of a language word-processor, a smart help system and a language-based programming system. These tools would use the understanding/generation functions of the language model, allowing the user to interact with the machine in everyday spoken language. The language word-processor would create documents from voiced instructions. If the user says, "Write a message greeting," the computer would generate the message. If told to "Emphasize this section" or "Insert a suitable picture," the computer would comply. The systems presented at the Japanese Society for Artificial Intelligence in June 2004 were favorably received.
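Dispatching everyday-language instructions to word-processor actions might look roughly like the following. The command phrases and actions are invented for this sketch; the actual prototypes rely on full semantic analysis rather than string matching:

```python
# Toy dispatch of everyday-language commands to document actions.

def write_greeting(doc):
    doc.append("Dear reader, ...")  # placeholder generated greeting

def emphasize_last(doc):
    if doc:
        doc[-1] = f"**{doc[-1]}**"

COMMANDS = {
    "write a message greeting": write_greeting,
    "emphasize this section": emphasize_last,
}

def execute(instruction, doc):
    """Look up and run the action for an instruction; report success."""
    action = COMMANDS.get(instruction.lower().rstrip("."))
    if action is None:
        return False  # a real system would fall back to semantic analysis
    action(doc)
    return True

doc = []
execute("Write a message greeting", doc)
execute("Emphasize this section", doc)
print(doc)  # ['**Dear reader, ...**']
```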
Language functions of the brain
In addition to memorizing a language system, the brain has three other language-related functions: processing, utilization and learning. Language processing involves understanding messages and generating responses. Thus far, we have constructed a computational model that can process a human language. The next step is to construct a neuro-computational model of language processing that can be embedded in a neural system.
Utilization includes recognition, thought, and the interpretation of experience through language. Engineering an artificial representation of language utilization would require a model of thinking. Constructing a language-based computational model of thinking is central to creating a model of the human brain, and our research on language-based computing will be useful in this effort. Although language learning is not a theme of our research, we believe that developing a language-learning model first requires defining the targets of learning: to communicate with language, the brain must learn a stratified language system, how to understand and generate text, and, naturally, the lexis.