0.01 of a confirmation on your network is actually worse than 0 confirmations on the Bitcoin network. With 0 confirmations you are somewhat protected by the P2P network's relay behaviour, but anyone can reverse a transaction with 0.01 confirmations, overruling the P2P network.
Yes, as a payment recipient you would still have to wait until the cost of creating a conflicting parallel stream with more total proof of work than your stream exceeded the value of the payment. But for small payments, that might not be long at all. Right now, a 10-minute block's worth of work is worth somewhere in the neighbourhood of $45.
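The waiting-time argument above can be sketched as a back-of-envelope calculation. All figures here are assumptions for illustration, taking the post's ~$45 per 10-minute block of work at face value:

```python
# Rough sketch (all figures are assumptions for illustration): if one
# 10-minute block's worth of proof of work costs roughly $45 to
# produce, a recipient might wait until redoing that much work would
# cost an attacker more than the payment is worth.

BLOCK_COST_USD = 45.0   # assumed cost of one block's proof of work
BLOCK_TIME_MIN = 10     # average minutes per block

def minutes_to_wait(payment_usd: float) -> float:
    """Minutes of accumulated work before redoing it costs more
    than the payment is worth."""
    blocks_needed = payment_usd / BLOCK_COST_USD
    return blocks_needed * BLOCK_TIME_MIN

print(minutes_to_wait(4.50))   # a $4.50 payment needs only ~1 minute of work
print(minutes_to_wait(45.0))   # a $45 payment needs a full block's worth
```

By this (very rough) measure, small payments would indeed be safe to accept after only a short wait.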
It's a lot more data that lightweight clients would have to download. There's an 80-byte header per block that clients need. If one were required for every transaction, lightweight clients would currently have to download about 17 MB more data, and that will become a lot more significant as the number of transactions per block increases. Also, if you have multiple "previous block" hashes in each transaction, you'll need headers that are much larger than 80 bytes. Normal clients will quickly lose the ability to send transactions.
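The overhead estimate above is simple arithmetic. The transaction count below is my own assumption, chosen to match the ~17 MB figure in the post:

```python
# Sketch of the extra download burden on lightweight clients if every
# transaction carried its own 80-byte header. The transaction count is
# an assumption picked to reproduce the post's ~17 MB figure.

HEADER_BYTES = 80

def header_overhead_mb(num_transactions: int) -> float:
    """Total size of per-transaction headers, in megabytes."""
    return num_transactions * HEADER_BYTES / 1_000_000

print(header_overhead_mb(212_500))  # 17.0 (MB)
```

And that assumes only a single 80-byte header per transaction; with multiple "previous block" hashes per transaction, each extra 32-byte hash pushes the per-transaction cost higher still.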
Ah, I was only considering full clients. Fair point. With my proposal, if you didn't have enough bandwidth to receive all the transactions, you wouldn't be able to verify the proof of work in the chain.
There is no independent rate of block creation or target difficulty anymore. Just one block per transaction, whenever a transaction happens, with whatever proof of work the submitter can generate. So "blocks" will get generated continually. ("Block" isn't a suitable word anymore, because it implies a grouping of transactions, which isn't the case here -- it makes more sense to just say "transaction".)
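A hypothetical sketch of the structure being proposed (the field names and hashing scheme here are my own assumptions, not anything specified in the thread): each transaction is its own "block", referencing previous transactions and carrying whatever proof of work its submitter chose to attach, with no network-wide target.

```python
# Hypothetical sketch of a per-transaction "block" (names and hashing
# scheme are assumptions): it links to previous transactions and
# carries however much proof of work the submitter felt like adding.

import hashlib
from dataclasses import dataclass

@dataclass
class Transaction:
    prev_hashes: list   # hashes of earlier transactions this one builds on
    payload: bytes      # the transfer itself
    nonce: int = 0      # incremented to attach proof of work

    def digest(self) -> bytes:
        h = hashlib.sha256()
        for p in self.prev_hashes:
            h.update(p)
        h.update(self.payload)
        h.update(self.nonce.to_bytes(8, "big"))
        return h.digest()

    def work(self) -> int:
        """Leading zero bits of the digest -- a proxy for attached work."""
        bits = bin(int.from_bytes(self.digest(), "big"))[2:].zfill(256)
        return len(bits) - len(bits.lstrip("0"))

# A submitter grinds the nonce for as long as they care to -- there is
# no target the network enforces, only a self-chosen effort level:
tx = Transaction(prev_hashes=[b"\x00" * 32], payload=b"alice pays bob 5")
while tx.work() < 8:
    tx.nonce += 1
```

Since every submitter picks their own effort level, work accumulates on the stream continuously rather than in discrete difficulty-regulated steps.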
OK then, how do you control the rate of currency inflation? Seems as though you are talking about a fundamentally different system that uses some of the same tools.
Normal "blocks" wouldn't generate coins anymore, since there is no target difficulty. You could still have coin-generating blocks, but they would only contain the generating transaction, and have to meet a target difficulty. Two generating transactions in parallel streams would conflict. See my post re: generating transactions.
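The split rule above can be sketched as a validity check. The threshold is an assumption for illustration; the point is that only generating transactions face a network-wide difficulty floor:

```python
# Sketch (threshold value is an assumption) of the rule above: ordinary
# transactions carry whatever work their submitter attaches, but a
# coin-generating transaction must meet a fixed target difficulty,
# which is what keeps the rate of coin creation under control.

GENERATION_TARGET_BITS = 32  # assumed network-wide target for generation

def is_valid_generation(leading_zero_bits: int) -> bool:
    """A generating transaction is only valid if its proof of work
    meets the target; ordinary transactions have no such floor."""
    return leading_zero_bits >= GENERATION_TARGET_BITS

print(is_valid_generation(40))  # True: enough work to mint coins
print(is_valid_generation(10))  # False: fine for a payment, not for generation
```

Conflicts between generating transactions in parallel streams would then be resolved the same way as any other conflict: the stream with more total proof of work wins.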
It is a somewhat different system in that it shifts the burden of proof of work from a few block-computing machines to anyone who wishes to record a transaction.