- procmail dependency eliminated!
- SQL queries that error out are now retried until they succeed (no locking mechanism)
Shouldn't it retry the query only if it gets the "database is locked" error? If a more serious error occurred, wouldn't it just keep pounding the database in an endless loop?
I'm still curious what is causing the locking issues in the first place. It really shouldn't be an issue for low traffic database stuff such as this.
The newly committed code retries queries 25 times before considering them dead, and detailed logging on failure shows the exact nature of the error.
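Roughly, the retry wrapper works along these lines. This is just an illustrative sketch, not the actual committed code - the function name, database path, and log path are made up, and it assumes sqlite3 exits non-zero and prints its error to stderr when a statement fails. It also shows where you could bail out early on anything other than the "database is locked" error, per the question above:

# Hypothetical sketch of a capped-retry query wrapper (not the committed code).
MAX_RETRIES=25

db_query() {
	local query="$1"
	local attempt=1
	local res err
	while [ $attempt -le $MAX_RETRIES ]; do
		err=$(mktemp)
		# capture the result on stdout and the error text on stderr
		if res=$(sqlite3 ~/.smartcoin/smartcoin.db "$query" 2>"$err"); then
			rm -f "$err"
			echo "$res"
			return 0
		fi
		# log the exact failure for later inspection
		echo "$(date '+%D %T') attempt $attempt failed: $(cat "$err") -- $query" >> ~/.smartcoin/sql_error.log
		# to retry only on lock contention, uncomment the next line:
		# grep -q "database is locked" "$err" || { rm -f "$err"; return 1; }
		rm -f "$err"
		attempt=$((attempt+1))
		sleep 0.1
	done
	return 1
}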
Regarding "low traffic database stuff" - I hardly consider this low traffic. In fact, I just ran a test to see what the query rate actually is:
with these changes...
Index: sql_ops.sh
===================================================================
--- sql_ops.sh (revision 407)
+++ sql_ops.sh (working copy)
@@ -33,6 +33,7 @@
echo "$res"
fi
+ echo "$Q" >> ~/test.txt
}
Index: smartcoin_status.sh
===================================================================
--- smartcoin_status.sh (revision 407)
+++ smartcoin_status.sh (working copy)
@@ -336,12 +336,19 @@
echo "INITIALIZING SMARTCOIN....."
clear
+echo "" > ~/test.txt
+
while true; do
+ dte=`date "+%D %T"`
+ echo "$dte" >> ~/test.txt
LoadProfileOnChange
UI=$(ShowStatus)
clear
ShowHeader
echo -ne $UI
+ dte=`date "+%D %T"`
+ echo "$dte" >> ~/test.txt
sleep $statusRefresh
+ exit
done
I ran smartcoin... Here is the resulting file (test.txt):
07/12/11 06:32:13
SELECT value FROM settings WHERE data='donation_start';
SELECT value FROM settings WHERE data='donation_time';
SELECT fk_profile FROM current_profile WHERE fk_machine=1;
SELECT value FROM settings WHERE data='donation_start';
SELECT value FROM settings WHERE data='donation_time';
SELECT fk_profile FROM current_profile WHERE fk_machine=1;
SELECT value FROM settings WHERE data='donation_start';
SELECT value FROM settings WHERE data='donation_time';
SELECT pk_miner FROM miner WHERE fk_machine=1 AND default_miner=1;
SELECT pk_profile, name, down FROM profile WHERE fk_machine='1' ORDER BY failover_order, pk_profile;
SELECT fk_device, fk_miner, fk_worker from profile_map WHERE fk_profile='7' ORDER BY fk_worker ASC, fk_device ASC
SELECT COUNT(*) FROM worker;
SELECT value FROM settings WHERE data='donation_start';
SELECT value FROM settings WHERE data='donation_time';
SELECT fk_profile FROM current_profile WHERE fk_machine=1;
SELECT value FROM settings WHERE data='donation_start';
SELECT value FROM settings WHERE data='donation_time';
SELECT pk_miner FROM miner WHERE fk_machine=1 AND default_miner=1;
SELECT pk_profile, name, down FROM profile WHERE fk_machine='1' ORDER BY failover_order, pk_profile;
SELECT fk_device, fk_miner, fk_worker from profile_map WHERE fk_profile='7' ORDER BY fk_worker ASC, fk_device ASC
SELECT value FROM settings WHERE data='dev_branch';
SELECT value FROM settings WHERE data='AMD_SDK_location';
SELECT value FROM settings WHERE data='dev_branch';
SELECT value FROM settings WHERE data='dev_branch';
SELECT value FROM settings WHERE data='phoenix_location';
SELECT value FROM settings WHERE data='AMD_SDK_location';
SELECT name,device from device WHERE pk_device=1;
SELECT value FROM settings WHERE data='phoenix_location';
SELECT value FROM settings WHERE data='dev_branch';
SELECT name,device from device WHERE pk_device=1;
SELECT name, path,launch FROM miner WHERE pk_miner=1;
SELECT value FROM settings WHERE data='dev_branch';
SELECT value FROM settings WHERE data='AMD_SDK_location';
SELECT name, path,launch FROM miner WHERE pk_miner=1;
SELECT value FROM settings WHERE data='phoenix_location';
SELECT value FROM settings WHERE data='dev_branch';
SELECT value FROM settings WHERE data='AMD_SDK_location';
SELECT name,device from device WHERE pk_device=1;
SELECT user,pass,pool.server, pool.port, pool.name from worker LEFT JOIN pool ON worker.fk_pool = pool.pk_pool WHERE pk_worker=4;
SELECT user,pass,pool.server, pool.port, pool.name from worker LEFT JOIN pool ON worker.fk_pool = pool.pk_pool WHERE pk_worker=4;
SELECT value FROM settings WHERE data='phoenix_location';
SELECT value FROM settings WHERE data='AMD_SDK_location';
SELECT name, path,launch FROM miner WHERE pk_miner=1;
SELECT value FROM settings WHERE data='dev_branch';
SELECT name,device from device WHERE pk_device=2;
SELECT value FROM settings WHERE data='phoenix_location';
SELECT value FROM settings WHERE data='AMD_SDK_location';
SELECT user,pass,pool.server, pool.port, pool.name from worker LEFT JOIN pool ON worker.fk_pool = pool.pk_pool WHERE pk_worker=4;
SELECT name, path,launch FROM miner WHERE pk_miner=1;
SELECT name,device from device WHERE pk_device=2;
SELECT value FROM settings WHERE data='dev_branch';
SELECT value FROM settings WHERE data='phoenix_location';
SELECT value FROM settings WHERE data='AMD_SDK_location';
SELECT name, path,launch FROM miner WHERE pk_miner=1;
SELECT user,pass,pool.server, pool.port, pool.name from worker LEFT JOIN pool ON worker.fk_pool = pool.pk_pool WHERE pk_worker=4;
SELECT name FROM machine WHERE pk_machine=1
SELECT name,device from device WHERE pk_device=2;
SELECT value FROM settings WHERE data='phoenix_location';
SELECT value FROM settings WHERE data='dev_branch';
SELECT value FROM settings WHERE data='AMD_SDK_location';
Select name,device,type from device WHERE fk_machine=1 AND disabled=0 ORDER BY device ASC
SELECT user,pass,pool.server, pool.port, pool.name from worker LEFT JOIN pool ON worker.fk_pool = pool.pk_pool WHERE pk_worker=4;
SELECT name, path,launch FROM miner WHERE pk_miner=1;
SELECT name,device from device WHERE pk_device=3;
SELECT value FROM settings WHERE data='AMD_SDK_location';
SELECT value FROM settings WHERE data='phoenix_location';
SELECT name, path,launch FROM miner WHERE pk_miner=1;
SELECT user,pass,pool.server, pool.port, pool.name from worker LEFT JOIN pool ON worker.fk_pool = pool.pk_pool WHERE pk_worker=4;
SELECT value FROM settings WHERE data='phoenix_location';
SELECT name,device from device WHERE pk_device=3;
SELECT user,pass,pool.server, pool.port, pool.name from worker LEFT JOIN pool ON worker.fk_pool = pool.pk_pool WHERE pk_worker=4;
SELECT name,device from device WHERE pk_device=3;
SELECT name, path,launch FROM miner WHERE pk_miner=1;
SELECT name, path,launch FROM miner WHERE pk_miner=1;
SELECT user,pass,pool.server, pool.port, pool.name from worker LEFT JOIN pool ON worker.fk_pool = pool.pk_pool WHERE pk_worker=4;
SELECT user,pass,pool.server, pool.port, pool.name from worker LEFT JOIN pool ON worker.fk_pool = pool.pk_pool WHERE pk_worker=4;
SELECT value FROM settings WHERE data='donation_start';
SELECT value FROM settings WHERE data='donation_time';
SELECT fk_profile FROM current_profile WHERE fk_machine=1;
SELECT value FROM settings WHERE data='donation_start';
SELECT value FROM settings WHERE data='donation_time';
SELECT value FROM settings WHERE data='donation_start';
SELECT value FROM settings WHERE data='donation_time';
SELECT fk_profile FROM current_profile WHERE fk_machine=1;
SELECT value FROM settings WHERE data='donation_start';
SELECT value FROM settings WHERE data='donation_time';
SELECT pk_miner FROM miner WHERE fk_machine=1 AND default_miner=1;
SELECT pk_profile, name, down FROM profile WHERE fk_machine='1' ORDER BY failover_order, pk_profile;
SELECT fk_device, fk_miner, fk_worker from profile_map WHERE fk_profile='7' ORDER BY fk_worker ASC, fk_device ASC
SELECT user,pass,pool.server, pool.port, pool.name from worker LEFT JOIN pool ON worker.fk_pool = pool.pk_pool WHERE pk_worker=4;
SELECT name FROM device WHERE pk_device=1;
SELECT user,pass,pool.server, pool.port, pool.name from worker LEFT JOIN pool ON worker.fk_pool = pool.pk_pool WHERE pk_worker=4;
SELECT name FROM device WHERE pk_device=1;
SELECT user,pass,pool.server, pool.port, pool.name from worker LEFT JOIN pool ON worker.fk_pool = pool.pk_pool WHERE pk_worker=4;
SELECT name FROM device WHERE pk_device=1;
SELECT user,pass,pool.server, pool.port, pool.name from worker LEFT JOIN pool ON worker.fk_pool = pool.pk_pool WHERE pk_worker=4;
SELECT name FROM device WHERE pk_device=2;
SELECT user,pass,pool.server, pool.port, pool.name from worker LEFT JOIN pool ON worker.fk_pool = pool.pk_pool WHERE pk_worker=4;
SELECT name FROM device WHERE pk_device=2;
SELECT user,pass,pool.server, pool.port, pool.name from worker LEFT JOIN pool ON worker.fk_pool = pool.pk_pool WHERE pk_worker=4;
SELECT name FROM device WHERE pk_device=2;
SELECT user,pass,pool.server, pool.port, pool.name from worker LEFT JOIN pool ON worker.fk_pool = pool.pk_pool WHERE pk_worker=4;
SELECT name FROM device WHERE pk_device=3;
SELECT user,pass,pool.server, pool.port, pool.name from worker LEFT JOIN pool ON worker.fk_pool = pool.pk_pool WHERE pk_worker=4;
SELECT name FROM device WHERE pk_device=3;
SELECT user,pass,pool.server, pool.port, pool.name from worker LEFT JOIN pool ON worker.fk_pool = pool.pk_pool WHERE pk_worker=4;
SELECT name FROM device WHERE pk_device=3;
SELECT down,failover_count FROM profile WHERE pk_profile='7';
UPDATE profile SET failover_count='3' WHERE pk_profile='7';
07/12/11 06:32:33
Now this shows that, in 0.20 seconds, 110 database queries are executed. Multiply by 5 to get queries per second: 550 queries per second is definitely not "low traffic" database stuff (especially for a CLI utility!)
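For what it's worth, the tally can be reproduced straight from a capture like this with something along these lines (assuming test.txt has the layout shown above - one statement per line, bracketed by two timestamp lines):

# count the logged statements in the capture
grep -cE '^(SELECT|Select|UPDATE|INSERT|DELETE)' ~/test.txt

# pull out the two bracketing timestamps written by the patched status loop
grep -E '^[0-9]{2}/[0-9]{2}/[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}$' ~/test.txt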
The locking problem happens because, as you said, SELECT queries can run concurrently - and if you look at the test.txt I just posted, almost all of the queries issued by the status script are SELECTs.
The problem is that SQLite requires exclusive access for UPDATE/INSERT statements (such as when you add/edit/delete something from the control screen, or when the update/patch system alters the schema). If one of these statements is attempted while a SELECT is reading the database, the writer can't obtain exclusivity and errors out. And as you can see, at 500+ queries per second, the odds that a SELECT from some other piece of code is running at that very moment are very high (though this is purely timing based, since the status loop doesn't run flat out - it has a built-in delay).
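If you want to see the contention for yourself, something along these lines reproduces it against a throwaway database. The path and table here are made up purely for the demonstration, and whether any given write actually collides with a reader is down to timing:

# Reproduce SQLite write/read contention on a scratch database.
DB=/tmp/locktest.db
sqlite3 "$DB" "CREATE TABLE IF NOT EXISTS t (k INTEGER PRIMARY KEY, v TEXT);"
sqlite3 "$DB" "INSERT INTO t (v) VALUES ('seed');"

# Reader: hammer the database with SELECTs in the background
( while true; do sqlite3 "$DB" "SELECT * FROM t;" >/dev/null; done ) &
READER=$!

# Writer: attempt UPDATEs; any that hit a concurrent read may fail
# with "database is locked"
for i in $(seq 1 100); do
	sqlite3 "$DB" "UPDATE t SET v='run $i' WHERE k=1;" \
		|| echo "write $i failed"
done

kill $READER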
In the end, though, the newer approach seems to be working as it should, so all is good!