Next scheduled rescrape ... never
Version 1
Last scraped
Edited on 26/06/2025, 02:18:50 UTC
That's unfortunate to see that seoincorporation did this.

I have a habit of inviting users in DM to join my campaigns. I find users based on my needs, targeted boards, etc. seoincorporation is one of them.
I can confirm this: LM sent me the proposal to join the campaign, and the users in his campaigns are carefully selected.

What surprised me the most is that he barely posted more than 20 posts per week. That's why when he sent me a DM regarding this issue, I trusted his words. It makes sense if you ask me. If he were a sig spammer, why would he post fewer than 20 posts per week despite having the option to get paid for 35 posts? BTW, he didn't tell me this. This is my observation because I remember I always wanted him to post more, but have always been disappointed by his numbers lol.

From now on, seoincorporation isn't part of the campaign until he is able to remove the neutral tag from his profile.
And here you have a good point. I really wish I could post more in your campaign, but my full-time job absorbs my time; working from 9am to 7pm, Monday to Saturday, is complicated. Even those 20 posts a month that I used to do weren't easy, but I did my best to stay active on the forum. Even with AI I wasn't able to fill the 35 posts, because it wasn't fully automatic: I used to read the post and verify the answer, so I was conscious of the threads. But the income from this campaign didn't make any difference to me; I used that money to gamble most of the time, so losing the signature isn't a big deal for me.

I knew there would be consequences for my actions, and I'm not even angry with LoyceV; in fact, I admire him for all the contributions he has made to the forum, so no hard feelings, and I respect his decision. Even if his neutral trust carries a lot of hate, there's no problem.

Quote
This signature spammer posted chatbot verbal diarrhea AKA plagiarism and claims "it was an experiment". Don't waste your time interacting with it.
Second link: https://bitcointalk.org/index.php?topic=5546497.msg65474764#msg65474764

If any of you think the same way, I highly recommend using the ignore button that you can find on any of my posts.

I'm still thinking about ways to fight AI on the forum, and I feel kind of stuck, but maybe by working with the community we could build the right tool for it. My idea is the following:

We have the patrol tool: https://bitcointalk.org/index.php?action=recent;patrol

So, we could write a script that parses that page and sends each post to an AI agent, where the post would get cataloged as "Legit, Random, Spam, Scam, AI...". That way we could manually verify each list. The process could be fully automatic, with a delay between patrol calls to avoid getting IP banned.
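As a rough sketch of that cataloging step (the `classify_post` function below is a hypothetical keyword heuristic, just a stand-in for the real AI agent call), the sorting into per-category lists for manual review could look like this:

```python
def classify_post(text):
    """Placeholder heuristic; a real version would send the text to an AI agent."""
    lowered = text.lower()
    if "guaranteed profit" in lowered or "double your btc" in lowered:
        return "Scam"
    if lowered.count("http") > 3:
        return "Spam"
    if len(lowered.split()) < 5:
        return "Random"
    return "Legit"

def catalog_posts(posts):
    """Group posts into category -> list of posts, so each list can be verified manually."""
    catalog = {}
    for post in posts:
        catalog.setdefault(classify_post(post), []).append(post)
    return catalog
```

The point of the two-step shape is that the classifier can later be swapped for an API call without touching the cataloging loop.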

But the problem is that patrol only shows new posts. If we want to monitor the full forum, we would have to watch replies in the threads too, which is complex: the RSS feed is blocked, and if we abuse forum requests to verify the different posts, the server will block us. So if someone has a way to get all the new posts across the whole forum, that would help to build the right patrol tool.
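The throttling part at least is easy to sketch. A minimal polling loop (the 60-second delay is an assumed polite interval, not a known forum limit) that only hands over posts it hasn't seen before could look like this:

```python
import time

def new_only(message_ids, seen):
    """Return ids not processed before, updating `seen` in place."""
    fresh = [m for m in message_ids if m not in seen]
    seen.update(fresh)
    return fresh

def patrol_loop(fetch_ids, cycles=3, delay=60):
    """Poll patrol `cycles` times, sleeping `delay` seconds between calls.
    `fetch_ids` is injected (e.g. a function scraping the patrol page) so the
    loop can be tested without hitting the forum. Yields only unseen ids."""
    seen = set()
    for i in range(cycles):
        for msg_id in new_only(fetch_ids(), seen):
            yield msg_id
        if i < cycles - 1:
            time.sleep(delay)  # throttle between requests to avoid an IP ban
```

Injecting `fetch_ids` keeps the rate-limiting and deduplication logic separate from the scraping itself.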

The AI agent could use the ZeroGPT API and other detectors to filter the posts, and GPT to identify scams.
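Since I haven't verified the ZeroGPT API (the endpoint, header, and field names below are pure assumptions, check their docs before using this), the sketch keeps the network call separate from the decision logic, which can be tested offline:

```python
import requests

# Assumed endpoint; verify against the real ZeroGPT documentation.
ZEROGPT_URL = "https://api.zerogpt.com/api/detect/detectText"

def label_post(ai_probability, scam_flag, ai_threshold=0.8):
    """Fold detector outputs into a single catalog label."""
    if scam_flag:
        return "Scam"
    if ai_probability >= ai_threshold:
        return "IA"
    return "Legit"

def check_with_zerogpt(text, api_key):
    """Hypothetical ZeroGPT call; the header and JSON field names are guesses."""
    resp = requests.post(
        ZEROGPT_URL,
        headers={"ApiKey": api_key},
        json={"input_text": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```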


As Linus Torvalds says: "Talk is cheap. Show me the code."

Code:
from bs4 import BeautifulSoup
import json
import sys
import requests
import re

def extract_after(text, key):
    """Return the first whitespace-separated token after `key`, or None."""
    try:
        return text.split(key)[1].split()[0]
    except IndexError:
        return None

def parse_quote_header(header_text):
    match = re.search(r"Quote from:\s*(.+?)\s+on\s+(.*)", header_text)
    if match:
        return match.group(1).strip(), match.group(2).strip()
    return None, None

def extract_user_profiles(soup):
    profiles = {}
    for td in soup.find_all("td", class_="poster_info"):
        a = td.find("a")
        if a:
            name = a.text.strip()
            href = a.get("href")
            profiles[name] = href
    return profiles

def extract_quotes_recursive(container, user_profiles):
    quotes = []
    headers = container.find_all("div", class_="quoteheader", recursive=False)

    for header in headers:
        quote = {}
        link_tag = header.find("a")
        quote["link"] = link_tag["href"] if link_tag else None
        user, date = parse_quote_header(header.get_text(strip=True))

        quote["author"] = user
        quote["profile_url"] = user_profiles.get(user, None)
        quote["date"] = date

        quote_block = header.find_next_sibling("div", class_="quote")
        if quote_block:
            quote["quotes"] = extract_quotes_recursive(quote_block, user_profiles)
            for q in quote_block.find_all("div", class_="quote", recursive=False):
                q.decompose()
            quote["content"] = quote_block.get_text(strip=True)
            quote_block.decompose()
        else:
            quote["quotes"] = []
            quote["content"] = ""

        header.decompose()
        quotes.append(quote)

    return quotes

def parse_html_posts(html_content):
    soup = BeautifulSoup(html_content, "html.parser")
    post_blocks = soup.find_all("td", class_="msgcl1")
    user_profiles = extract_user_profiles(soup)
    posts_data = []

    for block in post_blocks:
        post = {}
        anchor = block.find("a")
        post["message_id"] = anchor.get("name") if anchor else None

        poster_td = block.find("td", class_="poster_info")
        if poster_td:
            user_link = poster_td.find("a")
            post["author"] = user_link.text.strip() if user_link else None
            post["profile_url"] = user_link["href"] if user_link else None

            activity_text = poster_td.get_text()
            post["activity"] = extract_after(activity_text, "Activity:")
            post["merit"] = extract_after(activity_text, "Merit:")

        subject_div = block.find("div", class_="subject")
        post["title"] = subject_div.get_text(strip=True) if subject_div else None

        date_div = subject_div.find_next_sibling("div") if subject_div else None
        post["date"] = date_div.get_text(strip=True) if date_div else None

        post_div = block.find("div", class_="post")
        if post_div:
            post["quotes"] = extract_quotes_recursive(post_div, user_profiles)
            post["content"] = post_div.get_text(strip=True)
        else:
            # Keep the JSON shape consistent even when a post body is missing.
            post["quotes"] = []
            post["content"] = ""

        posts_data.append(post)

    return posts_data

def main():
    if len(sys.argv) < 2:
        print("Usage: python3 post_last.py <URL> [output.json]")
        sys.exit(1)

    url = sys.argv[1]
    output_path = sys.argv[2] if len(sys.argv) > 2 else "bitcointalk_parsed.json"

    try:
        headers = {"User-Agent": "Mozilla/5.0"}
        response = requests.get(url, headers=headers, timeout=30)
        response.raise_for_status()

        posts_json = parse_html_posts(response.text)
        with open(output_path, "w", encoding="utf-8") as outfile:
            json.dump(posts_json, outfile, indent=2, ensure_ascii=False)

        print(f"Success! Saved to {output_path}")

    except requests.RequestException as e:
        print(f"Error fetching URL: {e}")
        sys.exit(1)

if __name__ == "__main__":
    main()

Output:

https://privatebin.net/?b23d0b444e13d295#5xcuFNDVwzcPdZBjaiJtZoqaijrxUsstVM1G98WycE8z

Run it:

Code:
python3 post_last.py https://bitcointalk.org/index.php?topic=5546497.0 out.json

With this code we can directly parse any thread link from the forum and get a JSON of the posts and quotes, which would be a nice input for an automated process to filter the content. A tool like this could be a good base for patrolling the forum: with another script we could get the updated threads directly from the board of our interest, then create all the JSONs and feed the AI agent.
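That "get the updated threads from a board" step could start as a small extractor that pulls topic links out of a board page's HTML (assuming the standard `index.php?topic=<id>.<offset>` link format), feeding each link into the parser above:

```python
import re

# Matches board-page links of the form index.php?topic=<id>.<offset>
TOPIC_RE = re.compile(r'href="(https://bitcointalk\.org/index\.php\?topic=(\d+)\.\d+)"')

def extract_topic_links(board_html):
    """Map topic id -> first link seen, deduplicating repeated links per thread row."""
    topics = {}
    for url, topic_id in TOPIC_RE.findall(board_html):
        topics.setdefault(topic_id, url)
    return topics
```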
Original archived Re: My AI experiment on the forum
Scraped on 19/06/2025, 02:18:51 UTC