SINGAPORE, Sept. 23, 2024 (GLOBE NEWSWIRE) — Arllecta Group has undertaken the initial implementation of its sense-to-sense (S2S) algorithm, based on the company's own mathematical theory, Sense Theory, designed specifically for the creation of self-identifying AI.
A significant advantage of the S2S architectural solution is its ability to search for meaning both in A4-format documents and in ultra-large volumes of data held in cloud storage.
Now we will briefly describe the technological approach used in our solution.
The uniqueness of this algorithm, in comparison with the most advanced current GPT models, lies in at least four characteristic differences.
The first difference is that this algorithm does not require gigabytes of data and billions of normalized parameters for training. The architecture of the sense-algorithm includes, as a special case, an implementation of a restricted Boltzmann machine. At its core, however, lies the implementation of the zero-object paradigm as a vector quantity of non-constant length. This approach allows us to move from numerical (statistical) analysis to sense (semantic) analysis, which is extremely important when creating self-learning, adaptive AI. Without the ability to create “new” knowledge, no AI will succeed in self-identification (self-consciousness).
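For readers unfamiliar with the restricted Boltzmann machine mentioned above, the sketch below shows the standard, publicly known formulation of that model with one-step contrastive divergence (CD-1) training in NumPy. It is offered for orientation only: the layer sizes and toy data are hypothetical, and it does not reproduce the zero-object paradigm or any other proprietary part of Arllecta's sense-algorithm.

```python
# Generic restricted Boltzmann machine (RBM) sketch with CD-1 training.
# Illustrative only: standard textbook formulation, not the S2S sense-algorithm;
# layer sizes and the toy pattern are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden):
        self.W = rng.normal(0.0, 0.1, (n_visible, n_hidden))  # weights
        self.b_v = np.zeros(n_visible)                         # visible biases
        self.b_h = np.zeros(n_hidden)                          # hidden biases

    def cd1_step(self, v0, lr=0.1):
        """One contrastive-divergence (CD-1) update on a single binary sample."""
        p_h0 = sigmoid(v0 @ self.W + self.b_h)               # positive-phase hidden probabilities
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)   # sample hidden units
        p_v1 = sigmoid(h0 @ self.W.T + self.b_v)              # reconstructed visible probabilities
        p_h1 = sigmoid(p_v1 @ self.W + self.b_h)              # negative-phase hidden probabilities
        self.W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
        self.b_v += lr * (v0 - p_v1)
        self.b_h += lr * (p_h0 - p_h1)

# Toy usage: learn a single repeated 6-bit pattern.
rbm = RBM(n_visible=6, n_hidden=3)
pattern = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])
for _ in range(200):
    rbm.cd1_step(pattern)
```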
The second difference is that, in implementing clustering and data typing, AI tasks that are extremely important for the result, we use two innovative tools: the Neuro-Amorphic Function (NAF) and Sense Diagrams. The NAF allows us to extrapolate many physical phenomena found in nature and describes very well the distribution of senses (meanings) in the text received as input. A sense diagram, created by analogy with Bohr's atomic theory, very accurately describes large groups of sense (semantic) sets built from the input data as well as, which is extremely important, the semantic connections between the elements of these sets. It is worth noting that the computational time required to construct and analyze such sets is polynomial.
The third difference is that the sense-algorithm allows us to meaningfully connect objects of different natures drawn from a large volume of data. For example, given 1,000,000 sets classified according to any criterion and whose elements have different origins, the sense-algorithm can find a semantic connection between any two objects of these sets.
The fourth, and perhaps one of the most important, differences is that we do not use the transformer architecture at all, since we believe this architecture has several critical defects that greatly distort the meaning of the processed input text. Two obvious defects are that, firstly, the combinational law for the scalar product of a query vector and a key vector cannot be satisfied in the attention mechanism, and secondly, normalizing the values of the output vector of the attention mechanism introduces distortions into the decoder module, since any type of context vector, when normalized, loses the main thing: the semantic connection between its elements.
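For context, both criticisms target the standard scaled dot-product attention used in transformer models. The sketch below reproduces only that publicly known mechanism, with hypothetical dimensions, so the query-key scalar product and the normalization step being criticized can be seen in code; it is not part of S2Schat.

```python
# Standard scaled dot-product attention, shown only to locate the two steps
# criticized above: (1) the query-key scalar product and (2) the softmax
# normalization of the resulting scores. Generic public formulation;
# dimensions are hypothetical and this code is not part of S2Schat.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # step (1): scalar products of queries and keys
    weights = softmax(scores, axis=-1)   # step (2): normalization of the attention scores
    return weights @ V                   # context vectors passed on to the decoder

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query positions, model width 8 (hypothetical)
K = rng.normal(size=(6, 8))   # 6 key positions
V = rng.normal(size=(6, 8))   # value vectors
context = scaled_dot_product_attention(Q, K, V)
print(context.shape)          # (4, 8)
```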
Now we will briefly describe the AI focus in our solution.
According to Egger Mielberg, the author of the mathematical theory Sense Theory and the creator of the 25 main software modules that make up the software core of S2Schat, the current task of our AI is to produce a single sentence as the answer when analyzing not only a separate small passage from a book or a specialized article, but also a full-length book of 500 pages or more.
The main cognitive task of our solution is to implement two important directions. The first direction is to search for and linguistically describe the first-level zero object as a vector axis that determines the main meaning of the processed text.
The second direction is to search for and describe the depth of the semantic connection between zero objects of the second and subsequent levels. For this implementation, we use Sense Derivatives and the sense entropy value.
The uniqueness of sense entropy lies in its ability to describe the degree of semantic connection between objects of different natures, a capability missing from traditional mathematics.
Now a few words about the numbers obtained when comparing our S2Schat solution (13 of its 25 modules implemented) with the most advanced GPT solution currently on the market.
The numerical values indicated in this article are approximate and may contain inaccuracies due to technical and other reasons.
For our text analysis we used the American Ways book (XX, On Understanding excerpt).
Below are the exact specifications of our solution and the approximate specifications of the GPT solution.
In our algorithmic calculations we consider “sense energy”, which has a vector nature, and it is therefore more accurate to define the Law of Conservation of Sense as follows:
The total sense energy (SE) of any open sense space (OSS) is constant if the conditions of the Mielberg cycle are satisfied in this sense space.
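Stated symbolically, as a notation sketch only (SE, OSS, and t below are illustrative symbols introduced here, not formal notation from the Sense Theory publications):

```latex
% Notation sketch only: SE, OSS and t are illustrative symbols.
\mathrm{SE}(\mathrm{OSS}, t) = \mathrm{SE}(\mathrm{OSS}, t_0) = \text{const}
\quad \text{whenever the Mielberg-cycle conditions hold in } \mathrm{OSS}
```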
In other words, to create AI with self-identification, it is extremely important for the implemented algorithm that the meaning (sense) of the analyzed book, article, abstract, or other source remains the same.
The law of conservation of sense, like the law of conservation of energy, shows that the object of study continues to exist, only in different semantic (energy) forms.
The law of conservation of sense, in contrast to the Turing test, qualitatively determines the degree of “humanity” of digital AI.
Resources:
Sense Theory. Part 1.
https://vixra.org/pdf/1905.0105v1.pdf
S2Schat.
Sense Derivative.
https://www.researchgate.net/publication/344876659_Sense_Derivative
Sense Entropy. The Law of Conservation of Sense.
https://www.researchgate.net/publication/369295558_Sense_Entropy_The_Law_of_Conservation_of_Sense
Media Contact:
Contact person's full name: Egger Mielberg
Email id: [email protected]
Company website: https://www.arllecta.com/
Full company name: Arllecta Pte. Ltd.
Disclaimer: This content is provided by the sponsor. The statements, views, and opinions expressed in this column are solely those of the content provider. The information shared in this press release is not a solicitation for investment, nor is it intended as investment, financial, or trading advice. It is strongly recommended that you conduct thorough research and consult with a professional financial advisor before making any investment or trading decisions. Please conduct your own research and invest at your own risk.