LISA (Learning and Inference with Schemas and Analogies)
The central motivation behind this model is to integrate two major processes of analogy formation, memory access and structural mapping, while preserving both flexibility and sensitivity to structure.
Traditional symbolic systems preserve structure but are inflexible; connectionist systems are just the reverse: very flexible, but operating only at a low level of representation. Past hybrid models have lacked a natural interface between the two. LISA attempts to reconcile the two approaches and to unify access and mapping with both structure sensitivity and flexibility.
LISA can be divided roughly into two interacting systems: a "working memory" (WM) and a "long-term memory" (LTM). LTM is a layered network of "structural" units, and its bottom structural layer connects to WM's single layer of semantic units.
Concepts and relations (in LTM) are represented as trees of structural units of three types: propositions, subpropositions, and objects/predicates.
Each proposition tree in LTM is a potential "analog": the source or target of an analogy.
The semantic units of WM connect to the objects and predicates at the bottom of each LTM proposition tree and give them distributed representations. The more similar two objects/predicates are, the more semantic units they share.
WM also includes a set of "mapping" links between LTM structure units of the same type (e.g. predicate-predicate, proposition-proposition).
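To make the layered representation concrete, here is a minimal Python sketch of how one LTM proposition tree and its shared semantic features might be laid out. The class names, the toy "loves(John, Mary)" proposition, and the particular feature sets are my own illustrative assumptions, not details taken from the published model.

```python
# Illustrative sketch of LISA-style structure units (names and features assumed).
class Unit:
    def __init__(self, name):
        self.name = name
        self.activation = 0.0

class ObjectUnit(Unit):            # an object such as "John"
    def __init__(self, name, semantics):
        super().__init__(name)
        self.semantics = set(semantics)   # shared WM semantic features

class PredicateUnit(Unit):         # one role of a relation, e.g. "loves-agent"
    def __init__(self, name, semantics):
        super().__init__(name)
        self.semantics = set(semantics)

class SubProposition(Unit):        # binds one role to one filler
    def __init__(self, name, predicate, obj):
        super().__init__(name)
        self.predicate, self.obj = predicate, obj

class Proposition(Unit):           # top of one proposition tree in LTM
    def __init__(self, name, subprops):
        super().__init__(name)
        self.subprops = list(subprops)

# Toy analog: loves(John, Mary).  Overlapping feature sets encode similarity.
john  = ObjectUnit("John",  {"human", "male", "adult"})
mary  = ObjectUnit("Mary",  {"human", "female", "adult"})
lover = PredicateUnit("loves-agent",   {"emotion", "positive", "actor"})
loved = PredicateUnit("loves-patient", {"emotion", "positive", "recipient"})
loves_jm = Proposition("loves(John, Mary)", [
    SubProposition("John-as-lover", lover, john),
    SubProposition("Mary-as-loved", loved, mary),
])
```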
Activity starts in LTM, in a particular proposition unit chosen as the target analog. Flashes of activity spread alternately down the competing branches of this driver's structure units and activate patterns of semantic units in WM. These semantic units in turn activate "similar" objects and predicates, and activation spreads back up the competing branches of other, "recipient" analogs. The recipients that are most strongly activated ("retrieved" from LTM) at any moment are considered the best "source" analogs for the original target.
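The flavor of this access process can be shown with a rough sketch. In the real model retrieval depends on timed, phase-based firing in working memory; the single static pass over feature sets below, and the toy analogs in it, are my own simplifying assumptions.

```python
# Rough, static sketch of memory access by shared semantic features.
def retrieve(driver_features, ltm_analogs):
    """Return the stored analog whose objects/predicates overlap the driver most.

    driver_features: list of semantic-feature sets for the driver's objects/predicates.
    ltm_analogs:     dict mapping analog name -> list of feature sets for its units.
    """
    # Semantic units activated by the driver (union of its units' features).
    active = set().union(*driver_features)
    # A recipient's activation grows with the semantic units it shares with the driver.
    scores = {name: sum(len(feats & active) for feats in units)
              for name, units in ltm_analogs.items()}
    best = max(scores, key=scores.get)
    return best, scores

# Toy example: a "loves(John, Mary)" driver retrieves the romance analog, not the weather one.
driver = [{"human", "male"}, {"human", "female"}, {"emotion", "positive"}]
stored = {
    "adores(Bill, Susan)": [{"human", "male"}, {"human", "female"}, {"emotion", "positive"}],
    "heats(sun, ground)":  [{"star", "hot"}, {"soil"}, {"temperature"}],
}
print(retrieve(driver, stored))
```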
When structure units of the same type are active concurrently, the WM "mapping" weight between them strengthens; when their activity is uncorrelated, the connecting weight weakens.
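A small sketch can illustrate this Hebbian-style update. The learning rate, the co-activity threshold, and the [0, 1] weight range below are assumed values for illustration only, not parameters from the published model.

```python
# Hebbian-style sketch of the WM mapping-weight update (parameters assumed).
def update_mapping_weight(weight, act_a, act_b, rate=0.1, threshold=0.25):
    """Strengthen the link when both structure units fire together, weaken it otherwise."""
    if act_a * act_b > threshold:          # correlated activity
        weight += rate * (1.0 - weight)    # grow toward the maximum
    else:                                  # uncorrelated activity
        weight -= rate * weight            # decay toward zero
    return min(1.0, max(0.0, weight))

# Example: repeated co-activation drives the mapping weight toward 1.
w = 0.0
for _ in range(10):
    w = update_mapping_weight(w, act_a=0.9, act_b=0.8)
print(round(w, 3))
```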
As a bottom line, I would say it seems a beautiful model to me. It has long-term and working memories and a semantic network. It is a hybrid model that works in parallel and combines the advantages of both perspectives, and it satisfies the analogical constraints: pragmatic, semantic, and structural. I would like to see this and the other models of analogy making and reasoning used in the creation of an advanced AI model that competes with and surpasses human abilities. I am sure it will happen soon, thanks to the efforts of the scientists!
Orlin Baev, psychologist