Formulated within the Ergo Mentis project, one of the primary implications of the Janus twin universes model is: theoretically, we may receive hints of our future. Let’s re-examine this logic, marking the points "now" and "one year from now" on our timeline:
- The state of our universe at "now" depends – albeit very weakly – on the state of the twin universe at "now," since they collectively warp spacetime via their mutual gravitation. Of course, by "depend," I am referring to coupling/mutual influence rather than a rigid chain of cause and effect.
- The state of the twin universe at "now" is shaped by its evolution along its own arrow of time – that is, it is conditioned by its past, which corresponds to the state at "one year from now" on our scale, representing our future.
- The state of the twin universe in its past – at "one year from now" on our scale – will depend (from our perspective) or already depends (from its perspective) on the state of our universe at that time: they will (from our perspective) collectively warp spacetime, meaning they have already (from the twin universe's perspective) warped it together. Again, by "depend," I am referring to coupling/mutual influence.
- Thus, we see that the state of our universe at "now" indirectly depends on the state of our universe at "one year from now" – in other words, there is a mutual correlation between them. Theoretically, given the right detector, this dependence-correlation can be sensed, and with the right decoder, it can be interpreted. In the book Cogito Man, a hypothesis is proposed and supported that the human brain – in certain highly sensitive individuals – can function as such a detector-decoder.
Now, let’s integrate a large-scale, highly advanced AI system into this logic. Suppose it is trained on multi-year multimodal data reflecting the construction of private residences at various stages – encompassing photos, videos, and textual materials (local news clippings, social media posts, etc.). Further, assume that over these years, some houses drop out of the process before being delivered to the client – destroyed by fire or collapse due to structural errors – while others are successfully completed and commissioned. During training, the system analyzes many such examples. Following the logic above, one could posit that the information it processes regarding each house construction already contains subtle hints of that house's fate – since this future outcome subtly influences how that house was documented in the past. These correlational hints are extremely faint and imperceptible to humans, but AI excels specifically at detecting "blurred" statistical signals across vast datasets, provided they deviate even slightly from the noise floor. This "slight deviation" may be reflected somewhere in the distant decimal places of the model’s weights. Consequently, when such a system receives real-time rather than training data, it might – with some degree of confidence – predict the “destiny” of a given construction project, effectively "sensing" its future...
Theoretically, this is possible. From the AI's perspective, hints from the future are indistinguishable from any other correlations. Unlike a human, an AI has no hang-ups and does not fear being labeled a charlatan or a fortune-teller – if a correlation exists, the system will extract and utilize it in its outputs. We should also note that "traces of the future" may be significantly more discernible in textual data than in photos or videos. Articles and posts authored by humans – products of their cognitive work – reflect dynamics within the B Objects, which are isolated from the "biological noise" that likely masks future correlations in physical activities (such as the process of capturing video or photographs).
Is this “house construction” example a good one? No, it’s flawed – primarily because it is extremely difficult to decouple "signals from the future" from ordinary, well-understood factors that potentially influence the fate of a construction project. During training, the AI system encounters (by seeing, reading, and comparing) direct or indirect data regarding material quality, the execution of construction protocols, site locations, the behavior and remarks of management, and essentially everything occurring in the immediate vicinity. All of this can correlate with the future of a specific project without any cosmological tricks. Such an example wouldn't convince anyone; I mention it only as a contrast to experiments that might actually prove persuasive.
Let’s consider the first of these – one deliberately restricted to the "cognitive" channel of "future hints" (i.e., that which engages the B Objects). Suppose we recruit a large pool of participants, each assigned a task: without any predefined theme or objective, they must write four sentences – two about something red and two about something blue – such that the adjectives "red" and "blue" appear exactly three times each in the text. Under these constraints, the resulting writings will be "color-symmetric" – creating no obvious structural or statistical advantage to "red texts" over "blue texts," or vice versa. One could further tighten these symmetry constraints, but for simplicity, we will stick with them.
As soon as the text is written, it is stored in a database and never altered. All entries are thoroughly shuffled, and once a day, a simple script – written in advance, prior to the start of the experiment – randomly selects records using a digital identifier unrelated to the text's content. It targets those entered into the database exactly 30 days earlier and, again, randomly (without "peeking" at the text itself, the author's name, the filename, etc.), assigns each record a label of either 1 ("red") or 0 ("blue"). Multiple random number generators could be combined here to ensure the strongest possible "causality break". Thus, each text receives one of two digital labels – meaning "About Red" or "About Blue" – which are entirely causally independent of the text's creation and cannot be linked to its tone, style, subject matter, author, and so forth.
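To make the "causality break" concrete, here is a minimal sketch of what such a daily labeling script might look like. The database schema, table, and column names are invented purely for illustration; the essential point is that the script touches only record identifiers and submission dates – never the text, the author, or the filename:

```python
import random
import sqlite3
import tempfile
import os
from datetime import date, timedelta

# Hypothetical schema -- invented for illustration:
#   submissions(id INTEGER, submitted TEXT, label INTEGER)
# The script sees only ids and submission dates, never the text itself.

def assign_labels(db_path, today, rng):
    """Label every record submitted exactly 30 days before `today` with
    1 ('red') or 0 ('blue'), chosen purely at random; return how many."""
    cutoff = (today - timedelta(days=30)).isoformat()
    con = sqlite3.connect(db_path)
    ids = [row[0] for row in con.execute(
        "SELECT id FROM submissions WHERE submitted = ? AND label IS NULL",
        (cutoff,))]
    for rec_id in ids:
        con.execute("UPDATE submissions SET label = ? WHERE id = ?",
                    (rng.randint(0, 1), rec_id))
    con.commit()
    con.close()
    return len(ids)

# Demonstration on a throwaway database with two records.
path = os.path.join(tempfile.mkdtemp(), "experiment.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE submissions (id INTEGER, submitted TEXT, label INTEGER)")
con.executemany("INSERT INTO submissions VALUES (?, ?, NULL)",
                [(1, "2024-01-01"), (2, "2024-01-15")])
con.commit()
con.close()

labeled = assign_labels(path, date(2024, 1, 31), random.Random())
print(labeled)  # prints 1: only the record from exactly 30 days ago is labeled
```

In a real deployment, `rng` could be seeded from several independent hardware generators combined, as suggested above, to strengthen the causal isolation of the labels.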
Then this dataset is used to train an AI system that "sees" both the texts and their assigned labels. At a certain point, training ceases, the database is cleared, the system's weights are frozen, and it begins predicting the future labels of newly submitted texts immediately after those texts are generated. The predictions are stored separately from the texts themselves – the script assigning the labels has no access to them. Finally, once a sufficient volume of predicted results has been accumulated, they are compared against the actual labels generated by the script for those new texts.
What might we learn from this? If the AI system fails to capture any "hints from the future" – either because they do not exist or because the training was insufficient – then its predictions should be purely random; future labels are independent of the texts themselves under conventional causality. However, if the system demonstrates a stable (albeit slight) margin above random chance, we would be justified in assuming that it has identified the sought-after correlations between a future event (the assignment of a "red" or "blue" label) and the text-generation process that occurred 30 days prior to that event. Something would have subconsciously nudged the authors toward specific tonal nuances and stylistic patterns, subtly biasing their texts – according to markers imperceptible to us, yet accessible to a trained AI system – in one direction or the other. This would be an exceptionally promising result!
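The phrase "a stable margin above random chance" can be made precise with an exact binomial test – a sketch, assuming the labels are a fair 50/50 coin and the trials are independent:

```python
from math import comb

def p_above_chance(hits: int, trials: int) -> float:
    """One-sided exact binomial p-value: the probability of scoring at
    least `hits` correct out of `trials` by guessing a fair 50/50 label."""
    return sum(comb(trials, k) for k in range(hits, trials + 1)) / 2 ** trials

# 55% accuracy over 100 texts is statistically unremarkable...
print(p_above_chance(55, 100))
# ...but the same 55% sustained over 1000 texts is hard to dismiss as luck.
print(p_above_chance(550, 1000))
```

This also shows why the experiment needs a large pool of participants: a weak shift only becomes distinguishable from noise once the sample is large enough.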
The "red/blue" experiment ties the potential influence of the future to a cognitive process (the writing of text) – that is, to the activity of the human mind and, hence, to the hypothetical B Objects. Let us now consider another example that, by contrast, avoids such a link as much as possible. We will attempt to exclude the human element from the logical chain of "action in the present → result in the future → its backward influence," constructing it exclusively from "non-intelligent" components.
Suppose that in a biolab, a large number of identical cell cultures are grown under uniform conditions. For the first six days, fully automated detectors record hourly data streams for each sample: micrographs, temperature, pH, fluorescence levels, and so on. This data is not shown to any human, nor is it interpreted or aggregated in any way. On the seventh day, a random number generator assigns each sample a group label: "stress" or "control." Once the group distribution is finalized, an autonomous robot injects a harmful chemical into the "stress" samples, killing the cells or suppressing their growth.
Thus, without the slightest human intervention, a dataset is generated: "(1) detector data – (2) sample fate," where conventional causality between (1) and (2) is strictly excluded: the "stress/control" labels are created only after the data collection is finalized. An AI system is trained on this dataset and then issues predictions regarding the outcomes of new samples based on their sensory data from the first six days. If the predictive statistics prove even marginally superior to random chance, one could infer a potential influence of future occurrences on present events.
Of course, neither the "cognitive" nor the "robotic" experiment constitutes rigorous mathematical proof of "retro-causality." However, should they yield positive results, it would warrant serious reflection – and the allocation of resources for further research.
The most compelling scenario would be a substantial discrepancy where "cognitive reverse-causality" (the "red/blue" experiment) significantly outperforms the "robotic" version (the "stress/control" experiment). This would provide grounds to suggest not only that the future influences the present (which is reflected in the weights of an advanced AI), but that this influence becomes more pronounced through the involvement of the human mind. This, in turn, would serve as a robust argument for the hypothesis of B Objects – "external" components of intelligence existing outside of biological "noise," and thus acting as amplifiers for the "faint signals of the universe." One can only imagine what horizons this would open.
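The "substantial discrepancy" between the two experiments could itself be quantified – for instance, with a pooled two-proportion z-test on their prediction accuracies. A sketch with deliberately made-up counts (the normal approximation is adequate at these sample sizes):

```python
from math import sqrt, erf

def two_proportion_z(hits_a: int, n_a: int, hits_b: int, n_b: int) -> float:
    """One-sided p-value for H1: accuracy in experiment A exceeds that in
    experiment B, via a pooled two-proportion z-test (normal approximation)."""
    pa, pb = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (pa - pb) / se
    return 0.5 * (1 - erf(z / sqrt(2)))  # P(Z >= z)

# Illustrative, invented numbers: cognitive experiment at 53% accuracy,
# robotic experiment at 50.5%, 20000 trials each.
print(two_proportion_z(10600, 20000, 10100, 20000))
```

A small p-value here would indicate that the cognitive channel genuinely outperforms the robotic one, rather than the gap being sampling noise.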
Speaking of horizons – particularly in the "applied" sense – is it possible to derive actual benefits from "predicting the future" via AI, should experiments prove it feasible to some extent? Does this not trigger a causal conflict? The short answer is: no conflict arises, but the advantage in most domains will be short-lived.
Let’s consider the example that immediately comes to mind: financial markets. Suppose that, as a result of our "red/blue" experiment, we have created an AI system capable of detecting statistically significant signals from the future within "cognitive products." We now want to use the same approach to train a model on human-authored texts (news, social media commentary, analytical reviews, etc.) related to stock market behavior. Within these, it must "capture" hints regarding, for instance, the direction in which the S&P 500 index will move in two weeks.
Could this work? Yes, quite possibly – though training such an AI from scratch on market texts is hopeless: subtle retro-causal correlations will be drowned out by market noise and conventional factors (interest rates, inflation, corporate earnings, etc.). However, if one first trains a "base" predictor model on "clean" data where standard causality is intentionally severed (as in the red/blue experiment) and then fine-tunes it on market texts, it may yield results. This approach – transferring specialized "sensitivity" to other tasks – is frequently used, for instance, in AI systems for audio and video recognition. Clearly, even if successful, the "foretelling" effect would likely be faint, yet even a slight statistical edge is theoretically sufficient to generate alpha. In practice, however, things are far more complex. To actually turn that into profit, at least two conditions must be met: small-scale positions and the "uniqueness" of our "predictive signal."
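The "pretrain on clean data, then fine-tune" pattern can be sketched in miniature on synthetic data. Everything below – the tasks, the perceptron model, the noise levels – is invented purely to illustrate warm-starting from a base model's weights; it is not a claim about how an actual retro-causal predictor would be built:

```python
import random

# A miniature of the transfer pattern: train a perceptron on a low-noise
# task, then use its weights to warm-start brief training on a much
# noisier related task (standing in for market texts).

def train_linear(samples, w=None, epochs=50, lr=0.1):
    """Perceptron-style training; pass `w` to warm-start from prior weights."""
    dim = len(samples[0][0])
    w = list(w) if w is not None else [0.0] * dim
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            if y != pred:
                for i in range(dim):
                    w[i] += lr * (y - pred) * x[i]
    return w

def accuracy(w, samples):
    hits = sum((1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0) == y
               for x, y in samples)
    return hits / len(samples)

def make_task(n, noise, rng):
    """Synthetic task: the label follows the sign of x[0], flipped with
    probability `noise` -- a stand-in for a faint signal under heavy noise."""
    samples = []
    for _ in range(n):
        x = [rng.uniform(-1, 1) for _ in range(5)]
        y = 1 if x[0] > 0 else 0
        if rng.random() < noise:
            y = 1 - y
        samples.append((x, y))
    return samples

rng = random.Random(42)
clean = make_task(400, noise=0.05, rng=rng)    # stands in for "red/blue" data
noisy = make_task(400, noise=0.40, rng=rng)    # stands in for market texts

base = train_linear(clean)                     # pretrain the "base" predictor
tuned = train_linear(noisy, w=base, epochs=5)  # brief, warm-started fine-tune
print(accuracy(base, clean), accuracy(tuned, noisy))
```

The point of the warm start is that the fine-tuned model begins from a representation already sensitive to the underlying signal, rather than having to rediscover it under drowning noise – the same logic argued for above.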
The first point is straightforward: if one attempts to invest on a scale large enough to affect prices, any advantage is rapidly priced in and diluted, neutralizing potential gains. The market – with its legion of sophisticated algorithmic participants – instantaneously erodes all alpha signals it can detect, including micro-movements in price. Setting aside blatant insider trading (or narrow niche markets), the winners are not those who "know," but those who "execute": those who are first in, first out, and pay minimal commissions – essentially, the institutional giants. One can only "play" against them by remaining below their radar.
Signal uniqueness is a slightly more complex matter. Any public data – including the texts we intend to use – is analyzed by other market participants with the utmost rigor. If the correlation identified by our system is even marginally discernible, it will be detected and integrated into numerous strategies – even if it isn't labeled "retro-causality." Markets are indifferent to the philosophical or physical nature of a "hint"; if it can be technically exploited, competition will rapidly nullify it. However, it is possible that, through extensive engineering, we have developed a highly specific technology capable of uncovering a very non-trivial dependency – one that others might perceive as "noise" or complete "silence" (averaged out by multidirectional background spikes). Many trading algorithms deliberately suppress ultra-weak and unstable signals to avoid overfitting – therefore, our system might allow us to "see" what others either miss or deem irrelevant.
In this latter case, we could convert our unique statistical advantage into consistent profit – provided we operate modestly and avoid large-scale positions. However, this is not a permanent state: others will eventually discover our "deeply embedded" correlation – again, without even considering the interaction of twin universes – and it will become "common knowledge."
Perhaps the inevitable can be deferred by moving into less crowded niches, where signals are not devalued every minute by billions of competing dollars and the most sophisticated algorithms. Sports betting comes to mind – specifically "secondary" markets like esports (Counter-Strike, Valorant, etc.). Yet even there, the "modest stakes" condition remains – due to regulations, the close monitoring of frequent large wins, and so on – although, unlike financial markets, the bets themselves do not (directly) influence the ultimate outcome of the competition.
And now, a few words on the causal conflict of the "self-invalidating prediction" kind – the notion that "knowledge of the future altered the future." For instance, we receive a signal of a market rally in two weeks and start buying; others follow suit; then, seeing this, a major institutional player decides to take profits, causing the market to fall by the target date instead of rising. In reality, as noted above, no such conflict exists. A market forecast doesn't invalidate itself; rather, it often fails simply because of its inherent "instability" and "fragility." A "hint" of future events generated by a trained AI – or a highly sensitive "seer" – is not a rigid "prediction of fact," but merely a marginal probability shift. The possibility of an opposing outcome by no means drops to zero. On the specific day we received the signal, it could very well have been erroneous – yet four days later, after we and other market participants had acted, the AI system might offer a completely different, and this time accurate, assessment. The initial conditions for its forecast have shifted, and perhaps the correlation has even become slightly more distinct – possibly, in part, thanks to our actions four days prior.
In the “Cogito Man” novel, as well as in another section of this site, I discussed at length the relationship between current and future events from the perspectives of both the twin universes model and superdeterminism. I even introduced the concept of the "vector of causality" specifically to describe this "probabilistic pressure" toward the future acting in the present. The financial market can be viewed as a massive accelerator of this vector’s volatility – everyone is simply jostling too aggressively in their scramble for money...
All in all, in my view, zero-sum market games are not the optimal use case for retro-causal AI. A "hint from the future" is likely an exceptionally weak signal, making it ill-suited for high-stakes competitive struggle. It’s more appropriate to apply it where competition is absent and where minor statistical variances can lead to significant consequences in a branching labyrinth of possibilities. A single word, an intonation, or a subtle shift in meaning can shift us onto a different "branch" of events, markedly altering the entire developmental trajectory.
To illustrate this, let’s consider a self-disciplinary practice – something like the "pursuit of one’s best self." Suppose that on a daily, weekly, or monthly basis, you formulate written self-directives (plans, self-promises, commitments, or messages to loved ones) that define various aspects of your life in the near future – and you then make a concerted effort to follow them. Slight variations in tone, style, or phrasing can substantially impact your overall personal state of mind and, consequently, alter your perception of events and your subsequent reactions. This is a fundamentally non-linear scenario: small deviations in initial data – accumulating and layering upon one another – rapidly lead to significant divergence in outcomes. Therefore, even very faint "hints" that modify something within your written directives could make a noticeable, even destiny-shaping difference.
How might this be implemented in practice? Likely, as with financial markets, one would begin with a base "retro-causal" model trained on "clean" data devoid of standard causality (along the lines of the "red/blue" experiment). Then it would need to be "personalized" – fine-tuned on a vast, highly individualized info-corpus containing, on one hand, texts, notes, and letters, and on the other, formal descriptions and evaluations of the succeeding life circumstances, situations, and actions: everything that describes "step-by-step" personal development. Clearly, such formalization – and the creation of such a dataset in general – is an extraordinarily complex task; however, we are setting aside engineering difficulties to consider the theoretical concept. Suppose that over some foreseeable timeframe, such a dataset could be compiled and used to train an AI system in which a general "sensitivity to future hints" correlates with the products of a particular mind and the events following their creation.
Such a system could be used as a personal assistant: suppose you create drafts of your self-directives, and the system offers a choice of several polished versions for each, differing in microscopic "nuances of meaning" linked to the probabilities of "your behavioral characteristics in the near future" (overall mental tone / stress response types / persistence in defending your position / focus / distractibility – and so on). You select the versions that resonate with you and adopt them as behavioral "settings," which you then strive to implement as circumstances permit – returning to them, rereading them, and adjusting your actions accordingly.
Of course, it is difficult to imagine having the patience to follow such a practice constantly. Real life does not work that way: we get tired, we forget, we become lazy, or we are distracted; sometimes we simply aren't in the mood for "proper self-directives." However, the essence of such a retro-causal effect lies in its cumulative nature: weak probabilistic shifts, even if they only trigger occasionally, can substantially alter your life over months and years. Even the episodic assistance of such an aide – one that introduces a barely perceptible yet systematic "alignment with the future" – can push you toward the right pivotal decisions more frequently than the advice of a "standard" AI model functioning merely as a competent writing style editor.
Is this realistic? Why not. If the Janus theory is correct – which I personally find quite believable – then we exist within distributions of energy and matter formed through mutual influence with their future distributions. If we further assume that some version of the B Objects hypothesis is also true, then it is entirely plausible that the "parameters" of the thought-products (semantic emphases, word choice, intonation, specific stylistic features) correlate with the future far more noticeably than the physical properties of biological bodies. The physics of the body is noisy and saturated with randomness, whereas thinking "transforms" its chaos into much more structured spatio-temporal forms. Within these forms, the most subtle retro-causal dependencies can manifest with far greater clarity. And a trained AI is potentially capable of detecting them better than the vast majority of humans; that is its primary power – recognizing the weakest probabilistic signals across vast numbers of examples.
The example of a personal assistant might again raise questions about causal interaction with the future, such as: "If my future is sending me signals, does that mean it is already determined – regardless of the path I take? Do my efforts change nothing? Why would I even need a retro-causal advisor?" The answer remains the same: the presence of a "hint" (a weak probabilistic shift) does not imply a rigidly fixed (100% probable) finale. Some "branches" of events are slightly more consistent with the current configuration of "boundary conditions" than others. Moving along them can be viewed as a more comfortable path toward where one is most likely to end up – and the "comfort" of this path is determined, in part, by the decisions we make now and the actions we take. The labyrinth of events-experiences-meanings is forged from both sides of time!
This final point is fundamentally important. A "signal from the future" is not something external or separate from us: it already accounts for the fact that we might take action – choosing different intonations, rephrasing thoughts, resisting an impulsive decision, or returning to a forgotten promise. A retro-causal assistant "sensing" this signal is not an attempt to "cheat the future," but rather a tool that, by altering the micro-dynamics of our current actions, helps us participate more harmoniously in the approach of the future that is most probable from the standpoint of our specific present. Moreover, this "harmoniousness" may be a very practical, rather than abstract, concept. I have consistently emphasized the interplay between mind and physiology that defines our "Self" – and it is possible that a "comfortable" path is not a metaphor, but a very concrete characteristic of one’s sense of being in reality, including physical well-being.
As a brief reminder, the themes of predestination, causality, and the meaning of all our efforts and aspirations are discussed in detail in another section of the site. I will not repeat it here; I will only ask myself – as if in jest – am I writing all this while unwittingly following a "hint from the future," suggested to me through the external "image" of my mind, my B Object? From a future where AGI systems have learned to decode the faint glimmers on the windows of a train car racing through the labyrinth of events-experiences-meanings already shaped by our two universes?... I have no answer – as I am not the protagonist of the novel "Cogito Man".
And so, we end here.