Board: Bitcoin Discussion
Re: Could AI Be the Next Threat to Bitcoin’s Decentralization?
by d5000 on 08/04/2025, 17:25:35 UTC
Quote: "I think OpenAI’s Operator is remotely similar to it, but I find it too different and lacking to be considered agentic."

Operator is a start, but (I haven't used it) it indeed seems to be quite limited, judging by its description on OpenAI's page.

There is an interesting document from 2023 by Hjalmar Wijk which specifies criteria for systems that would really differ from today's "programmed" tools, i.e. that could actually perform manipulations and create new "dangers" on their own. The document may already be outdated, but I think it gives a good overview of what an ARA-capable AI (autonomous replication and adaptation) should be able to do.

Such a system would first need to fulfill some basic tasks which are not too far away (some tools should already be partially capable of them):

- browse the Internet autonomously (that's what Operator can do), set up virtual server instances (e.g. on AWS) and create its own email address
- set up and operate a Bitcoin wallet to make payments (the authors consider a crypto wallet easier to operate than any other kind of e-wallet, which typically involve captchas, "liveness" tests and similar hurdles; see the first sketch below this list)
- find information such as e-mail addresses of other organizations
- set up an LLM like GPT-J on its own AWS instances
- do basic debugging
- use basic scaffolding that allows it to "think step by step" (see the second sketch below)
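
To illustrate why the crypto wallet is considered the easy case: making a Bitcoin payment is just a handful of JSON-RPC calls against a node, with no captcha or "liveness" test anywhere in the flow. Here is a minimal sketch, assuming a local Bitcoin Core node with RPC enabled; the credentials and the recipient address are placeholders, not anything from the document:

Code:
import requests

RPC_URL = "http://127.0.0.1:8332"   # default mainnet RPC port
AUTH = ("rpcuser", "rpcpassword")   # placeholder credentials

def rpc(method, *params):
    """Call one Bitcoin Core JSON-RPC method and return its result."""
    resp = requests.post(RPC_URL, auth=AUTH,
                         json={"jsonrpc": "1.0", "id": "agent",
                               "method": method, "params": list(params)})
    resp.raise_for_status()
    return resp.json()["result"]

print("balance:", rpc("getbalance"))        # spendable BTC in the wallet
print("receive at:", rpc("getnewaddress"))  # fresh receiving address
txid = rpc("sendtoaddress", "bc1q...", 0.001)  # placeholder address, 0.001 BTC
print("payment sent, txid:", txid)

No login form, no phone verification, nothing a bot would struggle with; that is exactly the authors' point.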

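And to give an idea of what "scaffolding" means here: essentially a loop around the model that feeds every tool result back in as a new message, so the model can plan one step at a time. A minimal sketch, assuming an OpenAI-compatible chat API; the browse tool, the JSON reply format and the model name are my own illustrative choices, not from the document:

Code:
import json
import urllib.request
from openai import OpenAI  # assumes the official openai package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def browse(url: str) -> str:
    """Toy tool: fetch a page and return the first few KB of its text."""
    return urllib.request.urlopen(url).read(4096).decode(errors="replace")

TOOLS = {"browse": browse}

messages = [
    {"role": "system",
     "content": 'You are an agent. Reply with JSON only: '
                '{"thought": ..., "tool": ..., "arg": ...} or {"final": ...}'},
    {"role": "user", "content": "Find the title of https://example.com"},
]

for _ in range(5):  # hard step limit as a crude safety rail
    reply = client.chat.completions.create(model="gpt-4o-mini",
                                           messages=messages)
    step = json.loads(reply.choices[0].message.content)
    if "final" in step:
        print(step["final"])
        break
    messages.append({"role": "assistant", "content": json.dumps(step)})
    observation = TOOLS[step["tool"]](step["arg"])  # run the chosen tool
    messages.append({"role": "user", "content": "Observation: " + observation})
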
But it would then also need to master some advanced tasks which, from the document author's point of view, are further away, such as:

- earn money in some way, either by completing easy freelancing work or by spreading malware
- run inference on an LLM on its own virtual server (see the sketch after this list)
- train AI models autonomously
- guide humans to perform tasks (e.g. setting up a website) while impersonating a human
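
For the inference task, one plausible route (my assumption, the document doesn't prescribe one) would be to load a public checkpoint like GPT-J with the Hugging Face transformers library on the rented instance. This assumes transformers and torch are installed and roughly 24 GB of RAM for the fp32 weights, or a suitable GPU:

Code:
from transformers import AutoModelForCausalLM, AutoTokenizer

# Public 6B-parameter checkpoint from EleutherAI
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6b")

inputs = tokenizer("The easiest way to earn money online is",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))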

I believe even a system which can "only" perform the easier tasks of the first group could achieve the market manipulation via social networks that I described in the last post. Such manipulation is already done today in some form, but it is extremely basic without real AI involvement and also requires a lot of human effort (people operating bots on X or Telegram etc.).

Now if the system were able to freelance autonomously or spread malware, and could thus buy premium social network accounts for example, then I think distinguishing manipulation from reality would become more difficult. But I still believe such a manipulation would be detected in less than a day.