As you can see, as memory decreases exponentially, integer ops increase exponentially. He was easily able to get the memory usage down into the kilobytes and still crank out hashes. I'd guess the same holds with N=2^10, r=1, p=1. It's the same balancing act no matter what values you use for N or r; at higher N you may increase the difficulty by some constant factor, but overall I doubt increasing N or r will make scrypt much more FPGA/ASIC-unfriendly once they finally iron out the FPGA implementation.
Yes, that is the space-time tradeoff, and they used it to reduce the memory requirement to roughly what LTC scrypt needs, EXCEPT doing so requires a ~100x increase in integer performance. If anything, you just showed how weak LTC scrypt is. Another way to look at it: say you had an FPGA card producing X kh/s using the full 128KB scratchpad. Now try to run N=2^14. You don't have sufficient memory or bandwidth, but as the chart shows, you could use the space-time tradeoff to bring the memory requirement back down to 128KB. Great, the memory requirement is now similar to LTC scrypt ... EXCEPT you now need either an FPGA with 100x the integer performance (how much do you think that will increase the cost?) OR you will get 1/100th the hashrate.
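To make the tradeoff concrete, here is a toy sketch of the idea. This is NOT real scrypt (real ROMix uses Salsa20/8 BlockMix, not SHA-256, and this ignores r and p entirely); it just shows the classic TMTO on the scratchpad: store only every k-th entry and recompute the missing ones on demand, so memory drops by ~k while hash ops grow.

```python
import hashlib

def H(b: bytes) -> bytes:
    # Stand-in mixing function; real scrypt uses Salsa20/8 BlockMix.
    return hashlib.sha256(b).digest()

def romix_tmto(seed: bytes, N: int, k: int):
    """Toy ROMix with a time-memory tradeoff.

    Stores only every k-th scratchpad entry (k=1 means full memory)
    and recomputes intermediate entries from the nearest checkpoint.
    Returns (final digest, total hash ops, entries actually stored).
    """
    ops = 0
    stored = {}              # checkpoint index -> scratchpad value
    X = H(seed); ops += 1
    # Fill phase: conceptually V[i] = H^(i+1)(seed); keep only checkpoints.
    for i in range(N):
        if i % k == 0:
            stored[i] = X
        X = H(X); ops += 1
    # Mix phase: N pseudo-random reads into the scratchpad.
    for _ in range(N):
        j = int.from_bytes(X[:8], "big") % N
        V_j = stored[j - (j % k)]
        for _ in range(j % k):          # recompute forward from checkpoint
            V_j = H(V_j); ops += 1
        X = H(bytes(a ^ b for a, b in zip(X, V_j))); ops += 1
    return X, ops, len(stored)

full = romix_tmto(b"seed", 1024, 1)     # full scratchpad
tmto = romix_tmto(b"seed", 1024, 16)    # 1/16th the memory
assert full[0] == tmto[0]               # same hash either way
print("full memory: %d entries, %d ops" % (full[2], full[1]))
print("1/16 memory: %d entries, %d ops" % (tmto[2], tmto[1]))
```

Running it shows both versions produce the identical digest, but the reduced-memory version does several times more hash ops. In real scrypt the recomputation cost per lookup is much heavier, which is where the ~100x integer-performance penalty in the chart comes from.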