Transparallel processing by hyperstrings

Communicated by Julian Hochberg, Columbia University, New York, NY, May 20, 2004 (received for review October 9, 2003)

Abstract

Human vision research aims at understanding the brain processes that enable us to see the world as a structured whole consisting of separate objects. To explain how humans organize a visual pattern, structural information theory starts from the idea that our visual system prefers the organization with the simplest descriptive code, that is, the code that captures a maximum of visual regularity. Empirically, structural information theory gained support from psychological data on a wide variety of perceptual phenomena, but theoretically, the computation of guaranteed simplest codes remained a troubling problem. Here, the graph-theoretical concept of "hyperstrings" is presented as a key to the solution of this problem. A hyperstring is a distributed data structure that allows a search for regularity in O(2^N) strings as if only one string of length N were concerned. Thereby, hyperstrings enable transparallel processing, a previously undescribed form of processing that might also be a form of cognitive processing.

In the 1960s, Leeuwenberg (1) initiated structural information theory (SIT), which is a theory that aims at explaining how humans perceive visual patterns. A visual pattern can always be interpreted in many different ways, and SIT starts from the idea that the human visual system has a preference for the interpretation with the simplest descriptive code. In the 1950s, this idea had been proposed by Hochberg and McAlister (2), with an eye on Shannon's work (3) as well as on early 20th century Gestalt psychology (ref. 4; see also ref. 5). To this idea, SIT adds a concrete visual coding language (see below), thus specifying the search space within which the simplest codes are to be found.

In interaction with empirical research, SIT developed into a competitive theory of visual structure. Leeuwenberg et al. (6–18) applied SIT to explain a variety of perceptual phenomena such as judged pattern complexity, pattern classification, neon effects, judged temporal order, assimilation and contrast, figure-ground organization, beauty, embeddedness, hierarchy, serial pattern segmentation and completion, and handedness. SIT started with a classification model, but nowadays it also contains comprehensive models of amodal completion (19, 20) and symmetry perception (21–24).

For object perception, SIT proposes an integration of viewpoint-independent and viewpoint-dependent factors quantified in terms of object complexities (19). A Bayesian translation of this integration, using precisals (i.e., probabilities p = 2^(-c) derived from complexities c), suggests that fairly veridical vision in many worlds is a side effect of the preference for simplest interpretations (25). This idea, which challenges the traditional Helmholtzian idea that vision is highly veridical in only the one world in which we happen to live, is sustained by findings in the domain of algorithmic information theory (AIT), also known as the domain of Kolmogorov complexity or the domain of the minimal description-length (MDL) principle.
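As a small illustration of this Bayesian translation, the following Python sketch converts complexities c into precisals p = 2^(-c) and normalizes them into a prior over candidate interpretations. The interpretation labels and complexity values are invented placeholders, and the normalization step is my own reading of how precisals could feed a Bayesian account, not a procedure taken from this article.

# Minimal sketch: from structural complexities to precisal-based priors.
# The complexities below are hypothetical placeholders, not SIT outputs.

def precisal(c):
    """Precisal p = 2**(-c) derived from a complexity c."""
    return 2.0 ** (-c)

def prior_from_complexities(complexities):
    """Normalize precisals into a probability distribution over interpretations."""
    ps = {name: precisal(c) for name, c in complexities.items()}
    total = sum(ps.values())
    return {name: p / total for name, p in ps.items()}

# Hypothetical complexities of three candidate interpretations of one pattern:
# the simplest interpretation receives the largest prior probability.
print(prior_from_complexities({"square": 3, "diamond": 4, "irregular": 7}))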

During the past 40 years, SIT and AIT showed similar developments. These developments, however, occurred in a different order, and until recently, SIT and AIT developed independently (see ref. 26 for an overview of AIT and ref. 25 for a comparison of SIT and AIT). Currently, two differences between SIT and AIT are noteworthy.

One difference concerns the complexity measurement. Unlike AIT, SIT takes account of the perceptually relevant distinction between structural and metrical information (27). For example, the simplest codes of metrically different squares may have different algorithmic complexities in AIT but have the same structural complexity in SIT. By the same token, an AIT object class consists of objects with the same algorithmic complexity (ignoring structural differences), whereas an SIT object class consists of objects with the same structure (and hence with the same structural complexity) (25, 28). This might be a temporary difference, by the way: recently, AIT also seems to have come to recognize the relevance of structures (29).

The other difference concerns the search space within which simplest codes are to be found. In both SIT and AIT, the simplest code of an object is to be obtained by "squeezing out" a maximum amount of regularity in a symbol string that represents a reconstruction recipe for the object; one might think of computer programs (binary strings) that produce certain output (an object). To formalize this idea, AIT did not focus on concrete coding languages that squeeze out specific regularities but instead provided (incomputable) definitions of randomness (30) to specify the result of squeezing out regularity. SIT, conversely, focused on a (computable) definition of "visual regularity," which yielded a concrete coding language that squeezes out only transparent holographic regularities (for details, see ref. 31).

The transparent holographic character of these regularities has been shown to be relevant in human symmetry perception (21–24). It also gave rise to the concept of "hyperstrings" that, in this article, is presented as a key to the computation of guaranteed simplest SIT codes. I begin by specifying the coding language of SIT and the related minimal-encoding problem.

SIT Coding Language

Basically, there are only three transparent holographic regularities, namely, iterations, symmetries, and alternations, which are described by, respectively, I-forms, S-forms, and A-forms (for short, ISA-forms), as given in the following definition of SIT's coding language.

Definition 1: An SIT code X̄ of a string X is a string t_1 t_2...t_m such that X = D(t_1)...D(t_m), where the decoding function D: t → D(t) takes one of the following forms:

D(t) = t if t is a single symbol;
I-form: D(n*(ȳ)) = D(ȳ) D(ȳ)...D(ȳ) (n times, n ≥ 2);
S-form: D(S[(x̄_1)(x̄_2)...(x̄_n), (p̄)]) = D(x̄_1)...D(x̄_n) D(p̄) D(x̄_n)...D(x̄_1);
A-form: D(<(ȳ)>/<(x̄_1)(x̄_2)...(x̄_n)>) = D(ȳ) D(x̄_1) D(ȳ) D(x̄_2)...D(ȳ) D(x̄_n), or
        D(<(x̄_1)(x̄_2)...(x̄_n)>/<(ȳ)>) = D(x̄_1) D(ȳ) D(x̄_2) D(ȳ)...D(x̄_n) D(ȳ);

for strings y, p, and x_i (i = 1, 2,..., n). The code parts (ȳ), (p̄), and (x̄_i) are called "chunks"; the chunk (ȳ) in an I-form or A-form is called a "repeat"; the chunk (p̄) in an S-form is called a "pivot," which, as a limit case, may be empty; the chunk string (x̄_1)(x̄_2)...(x̄_n) in an S-form is called an "S-argument" consisting of "S-chunks" (x̄_i); and the chunk string (x̄_1)(x̄_2)...(x̄_n) in an A-form is called an "A-argument" consisting of "A-chunks" (x̄_i).
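To make the decoding function D tangible, here is a small Python sketch that decodes nested ISA-forms. The tuple representation (terms tagged "I", "S", and "A") is an assumption of mine, chosen only to mirror Definition 1; it is not SIT's own surface notation. Chunks are represented as codes, so chunk contents may themselves be encoded.

# Sketch of the decoding function D from Definition 1, under an assumed
# representation: a code is a list of terms, where a term is either a plain
# symbol (a string) or a tagged tuple for an I-, S-, or A-form.

def decode(code):
    """Decode a list of code terms t_1...t_m into D(t_1)...D(t_m)."""
    return "".join(decode_term(t) for t in code)

def decode_term(t):
    if isinstance(t, str):                       # a plain symbol: D(t) = t
        return t
    tag = t[0]
    if tag == "I":                               # ("I", n, repeat): n*(y) -> y y ... y
        _, n, repeat = t
        return decode(repeat) * n
    if tag == "S":                               # ("S", chunks, pivot):
        _, chunks, pivot = t                     # S[(x1)..(xn),(p)] -> x1..xn p xn..x1
        left = [decode(c) for c in chunks]
        return "".join(left) + decode(pivot) + "".join(reversed(left))
    if tag == "A":                               # ("A", repeat, chunks, side):
        _, repeat, chunks, side = t              # repeat alternates left or right of chunks
        y = decode(repeat)
        xs = [decode(c) for c in chunks]
        if side == "left":                       # <(y)>/<(x1)..(xn)> -> y x1 y x2 ... y xn
            return "".join(y + x for x in xs)
        return "".join(x + y for x in xs)        # <(x1)..(xn)>/<(y)> -> x1 y x2 y ... xn y
    raise ValueError("unknown code term: %r" % (t,))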

Hence, an SIT code may involve not only encodings of strings inside chunks [that is, from (y) into (ȳ)] but also hierarchically recursive encodings of S-arguments or A-arguments, that is, of chunk strings (x̄_1)(x̄_2)...(x̄_n). As I specify in the next section, this hierarchically recursive search for regularity creates the problem that, to compute simplest SIT codes, a superexponential amount of time seems to be required (see also ref. 32). The following sample of SIT codes of one and the same symbol string may give a gist of this problem. Code 1 is a code with six code terms, namely, one S-form, two I-forms, and three symbols. Code 2 is an A-form with chunks containing strings that may be encoded as given in code 3. Code 4 is an S-form with an empty pivot and illustrates that, in general, S-forms describe broken symmetry (33); mirror symmetry then is the limit case in which every S-chunk contains only one symbol. Code 5 gives a hierarchically recursive encoding of the S-argument in code 4. Code 6 is an I-form with a repeat that has been encoded into an A-form with an A-argument that, in code 7, has been encoded hierarchically recursively into an S-form.
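As a toy illustration (using my own string, not the sample string from this article), the decoder sketched above shows two structurally different codes decoding to one and the same string "ababbaba": an iteration-based code and a mirror-symmetric S-form with an empty pivot.

# Two different codes of the toy string "ababbaba" (my own example):
# code_1 reads it as 2*(ab) 2*(ba); code_2 reads it as S[(a)(b)(a)(b)]
# with an empty pivot, i.e., as a mirror symmetry.
code_1 = [("I", 2, ["a", "b"]), ("I", 2, ["b", "a"])]
code_2 = [("S", [["a"], ["b"], ["a"], ["b"]], [])]
assert decode(code_1) == decode(code_2) == "ababbaba"

Which of such competing codes counts as simplest is decided by a complexity measure, touched on in the next section.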

SIT's Minimal-Encoding Problem

As said, the coding language of SIT specifies the search space within which simplest codes are to be found. To search this space for simplest codes, one of course needs a measure of code complexity, but this is a subordinate problem in this article. SIT has known complexity measures that were either empirically supported or theoretically plausible (28), but since about 1990, SIT uses a measure that is both (ref. 15; see also ref. 25). For any complexity measure, however, the question is whether one can ever be sure that a given code is indeed a simplest code. In other words, the fundamental problem of computing guaranteed simplest codes is to take account of all possible codes of a given string.

It is expedient to note that the SIT minimal-encoding problem differs from context-free grammar (CFG) problems such as finding the smallest CFG for any given string, for which fast approximation algorithms exist (e.g., see refs. 34 and 35). SIT starts from a particular CFG, namely, the coding language given in Definition 1, which was designed specifically to capture perceptually relevant structures in strings. The minimal-encoding problem of SIT then is to compute, for any given string, a guaranteed simplest code (i.e., no approximation) by means of the specific coding rules supplied by this perceptual coding language.

A part of SIT's minimal-encoding problem can be solved as follows by means of Dijkstra's (36) shortest-path method (SPM). Suppose that, for every substring of a string of length N, one already has computed a simplest covering ISA-form, that is, a simplest substring code among those that consist of only one ISA-form. Then, Dijkstra's O(N^2) SPM can be applied to select a simplest code for the entire string from among the O(2^N) codes that are then still possible (see Fig. 1; see ref. 37 for details on this application).
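The following Python sketch shows one way this selection step can be set up; it is my own simplification, not the implementation from ref. 37. Nodes 0..N are the positions between symbols, an edge from i to j stands for covering the substring between those positions by its precomputed simplest covering ISA-form, and edge_weight(i, j) is assumed to return that form's complexity (supplied here by the caller; a real run would plug in SIT's complexity measure).

import heapq

def cheapest_covering(n, edge_weight):
    """Return the minimal total complexity of a code for a string of length n,
    where each edge (i, j) carries the complexity of the precomputed simplest
    covering ISA-form of the substring between positions i and j.  A shortest
    path from node 0 to node n corresponds to a concatenation of covering
    ISA-forms, i.e., to one simplest code of the entire string."""
    dist = [float("inf")] * (n + 1)
    dist[0] = 0.0
    heap = [(0.0, 0)]
    while heap:
        d, i = heapq.heappop(heap)
        if d > dist[i]:
            continue                              # stale queue entry
        for j in range(i + 1, n + 1):             # O(N^2) edges in total
            nd = d + edge_weight(i, j)
            if nd < dist[j]:
                dist[j] = nd
                heapq.heappush(heap, (nd, j))
    return dist[n]

# Toy run with substring length as a stand-in complexity; keeping predecessor
# links instead of only distances would recover the selected code itself.
print(cheapest_covering(8, lambda i, j: j - i))

The supposition that simplest covering ISA-forms are already available is, of course, where the hierarchically recursive encoding problem from the previous section resides.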

Fig. 1.

Suppose for the string
