Board: Bitcoin Discussion
Re: Could AI Be the Next Threat to Bitcoin’s Decentralization?
by ranochigo on 08/04/2025, 17:35:54 UTC
Operator is a start, but (I haven't used it) it indeed seems to be quite limited, judging by its description on OpenAI's page.

There is an interesting document from 2023 by Hjalmar Wijk which specifies some criteria for systems that could go meaningfully beyond today's "programmed" tools and actually perform manipulations and create new "dangers". The document may already be outdated, but I think it gives a good overview of what an ARA-capable (autonomous replication and adaptation) AI should be able to do.

Such a system would first need to fulfill some basic tasks which are not too far away (some tools should already be partially capable of them):

- be able to browse the Internet autonomously (that's what Operator can do), set up virtual server instances (e.g. on AWS) and its own email address
- set up and operate a Bitcoin wallet to make payments (the author considers a crypto wallet easier for an AI to operate than other kinds of e-wallets, which are guarded by captchas, "liveness" tests and similar obstacles); see the first sketch after this list
- find information such as the e-mail addresses of other organizations
- set up an LLM like GPT-J on its own AWS instances
- do basic debugging
- use basic scaffolding, allowing it to "think" step by step (see the second sketch after this list)
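
To illustrate how low the wallet hurdle actually is: with a Python library (I'm picking the third-party `bit` package purely as an example; the document doesn't prescribe any tooling), generating and operating a Bitcoin wallet takes a handful of lines. Testnet is used here so no real coins are involved, and the destination address is a placeholder:

Code:
# Minimal Bitcoin wallet sketch with the third-party `bit` library
# (pip install bit). Testnet only, so no real coins are at risk.
from bit import PrivateKeyTestnet

key = PrivateKeyTestnet()        # generate a fresh private key
print("address:", key.address)   # fund this from a testnet faucet first

# Once funded, one call builds, signs and broadcasts a transaction
# (the destination address below is a placeholder):
# txid = key.send([("mzTestnetPlaceholderAddress", 1000, "satoshi")])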
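
And the "step by step" scaffolding is conceptually just a loop around a model API that parses the model's chosen action and feeds the result back in. A minimal sketch, assuming the `openai` Python package and an API key in the environment; the model name and the single stub "tool" are my placeholders, not anything from the document:

Code:
# Minimal "think step by step" agent scaffold.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = ("Solve the task step by step. Each turn, output either\n"
          "ACTION: fetch <url>   to request a web page, or\n"
          "FINAL: <answer>       when you are done.")

def fetch(url: str) -> str:
    # Stub tool: a real agent would do an HTTP request here.
    return f"(pretend contents of {url})"

def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",          # placeholder model name
            messages=messages,
        ).choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        if reply.startswith("FINAL:"):
            return reply[len("FINAL:"):].strip()
        if reply.startswith("ACTION: fetch "):
            observation = fetch(reply[len("ACTION: fetch "):].strip())
            messages.append({"role": "user",
                             "content": f"OBSERVATION: {observation}"})
    return "(gave up after max_steps)"

print(run_agent("Find the contact e-mail on example.org"))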

But it would also need to master some advanced tasks which, from the PoV of the document's author, are further away, such as:

- earn money in some way, either by completing simple freelancing work or by spreading malware
- run inference on an LLM on its virtual server (see the sketch after this list)
- train AI models autonomously
- guide humans to perform tasks (e.g. setting up a website) while impersonating a human
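
The inference part, at least, is already trivial on a rented GPU server: loading GPT-J with the Hugging Face transformers library is just a few lines (a generic sketch, not from the document; the prompt and sampling settings are arbitrary):

Code:
# Minimal GPT-J inference sketch (pip install torch transformers accelerate).
# Needs a GPU with roughly 12 GB of memory for fp16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    torch_dtype=torch.float16,   # half precision to fit in GPU memory
    device_map="auto",           # place layers on available devices
)

inputs = tokenizer("Bitcoin is", return_tensors="pt").to(model.device)
output = model.generate(inputs.input_ids, max_new_tokens=40,
                        do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))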

I believe even a system which can "only" perform the easier tasks of the first group could achieve the market manipulation via social networks that I described in my last post. Such manipulation is already happening today in some form, but without real AI involvement it is extremely basic and also requires a lot of human effort (people operating bots on X, Telegram etc.).

Now if the system were able to freelance autonomously or spread malware, and could thus buy premium social network accounts for example, then I think distinguishing reality from manipulation would become more difficult. But I still think it would take less than a day to detect such a manipulation.
Yeah, that's the thing. Humans are the biggest threat. AI is not going to be one; people only see it as a threat because they think of AI from the POV of sci-fi movies.

Thing is, humans can already do all of that. There is practically nothing that an AI mentioned in any of those papers can do which a somewhat knowledgeable human being, or even worse, an organization with significant capability, cannot. AI at the present day is really not as smart as people think it is. LLMs, and even the Operator mentioned above, are still fairly limited AI which at their core run on math and probability (attention mechanisms form the bedrock).
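
To make the "math and probability" point concrete: the attention mechanism at the core of every transformer LLM boils down to a few matrix operations, softmax(QK^T / sqrt(d))V. A toy NumPy sketch with random data, just to show there is no "thinking" hidden in there:

Code:
# Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
# Toy NumPy illustration with random data.
import numpy as np

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)   # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V              # weighted average of the value vectors

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 "query" tokens, dimension 8
K = rng.normal(size=(6, 8))   # 6 "key" tokens
V = rng.normal(size=(6, 8))   # one value vector per key
print(attention(Q, K, V).shape)   # -> (4, 8)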

I'd argue that humans are the biggest threat; AI would probably not be one at all. AI won't do anything that a human doesn't want it to, because the LLMs we have today don't actually think, and a little-known fact is that even "reasoning" models are still, in fact, not the AI thinking. People are just too caught up in the hype and marketing, and they end up believing in this entire fantasy about AI.