Board: Economics
Topic: Re: Martin Armstrong Discussion
by trulycoined on 23/07/2019, 10:40:28 UTC
We are talking about the back-end that has been running for decades. Yes. Ported from Assembler code.

In my understanding it's not the backend. Since MA doesn't want the backend connected to the internet, the data must be exported at least once to some other system and then probably distributed or mirrored from there to one or several web servers. I assume this is where the discrepancies and out-of-sync issues happen.

And yet:

https://www.armstrongeconomics.com/institutional-time-share/
You can put your entire portfolio into the system and it will monitor everything far more intensely than any human staff are even capable of accomplishing and without the risk of pure opinion.

That to me sounds like an internet-connected system, with Socrates hosted on the IBM Sequoia supercomputer... which was commissioned by the US government initially for nuclear weapons simulations and eventually opened up to academic researchers:

https://www.computerworld.com/article/2530771/ibm-to-build-massive-supercomputer-for-u-s--government.html
https://phys.org/news/2013-05-simulation-sequoia-supercomputer.html

We can safely debunk that one, not least because MA is deeply suspicious and critical of government, so why would he use their equipment to host his messianic Socrates source code? Also, ML/AI requires vast volumes of data, ideally in real time, especially if it is making predictions. Any device not connected to the internet would face delays in its access to data, and trying to time-stamp manually uploaded data would turn messy and be hugely laborious. It also means the machine cannot run in real time; it would be constantly on the back foot and useful only for longer-term predictions, if it could predict anything at all.

Further, as I have previously explained, even a rudimentary pre-school understanding of physics would make one realise MA must be lying about something. A supercomputer of the kind MA alleges (whether IBM Sequoia or not), being fed vast volumes of data to predict future socioeconomic flashpoints "to the day" and market moves across all asset classes, would require VAST volumes of energy.

As one of the articles posted above explains about IBM Sequoia:
The supercomputer is also helping to drive a massive power upgrade at Lawrence Livermore, which is increasing the amount of electricity available for all its computing systems from 12.5 megawatts to 30 megawatts. To achieve the upgrade, it will run more power lines to its facility. Sequoia alone is expected to use about 6 megawatts, according to Seager.
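To put that 6 megawatt figure in perspective, here is a rough back-of-envelope sketch of what running such a machine continuously would cost in electricity alone. The 6 MW draw comes from the article above; the ~$0.07/kWh industrial electricity rate is my own assumption and will vary by region and year:

```python
# Back-of-envelope: annual electricity bill for a Sequoia-class machine.
# 6 MW draw is from the article; the price per kWh is an assumed
# US industrial rate, not a figure from any source.
POWER_MW = 6.0
PRICE_PER_KWH = 0.07          # USD, assumed industrial rate
HOURS_PER_YEAR = 24 * 365     # ignoring downtime and leap years

kwh_per_year = POWER_MW * 1000 * HOURS_PER_YEAR
annual_cost = kwh_per_year * PRICE_PER_KWH

print(f"{kwh_per_year:,.0f} kWh/year")  # 52,560,000 kWh/year
print(f"${annual_cost:,.0f}/year")      # $3,679,200/year
```

So even under generous assumptions, powering the machine would run into millions of dollars per year before you touch cooling, staff, or hardware depreciation.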

So where exactly is MA sourcing this energy, and how is he funding it? Perhaps that perpetual-energy device he mentioned on his travels to Japan is in fact in his basement... running a dynamo that keeps Socrates' energy-hungry guts fed ad astra.

The IBM Sequoia claim is the perfect beard for what is potentially a DOS-based system running on a basic server somewhere. It plays well to his typical audience (non-techy, ageing, unlikely to understand or scrutinise it), and to his international audience in Japan/China, where claims are more likely to be lost in translation and harder to scrutinise when 99% of his content is in English.