So really all I know is that they intended to "scale" by increasing the blocksize. ... Has anything changed? Are they proposing realistic alternatives to lightning for scaling? Are there new developments that I should be aware of?
For the time being, mostly blocksize, since a larger blocksize is all that is needed at this stage.
The BCH community, however, did demonstrate conclusively that:
- generic home computer HW on consumer broadband running bitcoind can handle ~100 tx/s
- even above ~100 tx/s there is still plenty of CPU headroom
- a fix to bitcoind's naive threading model increases performance
- bitcoind with fixed threading on same HW & net can handle ~500 tx/s
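To put those throughput numbers in blocksize terms, here's a quick back-of-envelope sketch. The average transaction size and block interval are my own illustrative assumptions (roughly 250 bytes per transaction, 600-second target interval), not figures from the tests above:

```python
# Back-of-envelope: block size implied by a sustained tx throughput.
# Assumptions (illustrative, not measured): ~250-byte average transaction,
# 600-second target block interval.
AVG_TX_BYTES = 250
BLOCK_INTERVAL_S = 600

def implied_block_size_mb(tx_per_sec):
    """Block size (in MB) needed to sustain tx_per_sec at the assumed tx size."""
    return tx_per_sec * AVG_TX_BYTES * BLOCK_INTERVAL_S / 1_000_000

for rate in (100, 500):
    print(f"{rate} tx/s -> ~{implied_block_size_mb(rate):.0f} MB blocks")
```

Under those assumptions, ~100 tx/s works out to roughly 15 MB blocks and ~500 tx/s to roughly 75 MB, which gives a sense of the blocksize range those demonstrations imply.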
The BCH community is also doing the work of re-enabling a host of opcodes that were thrown overboard years ago, before any real analysis was performed on them. Oh, I guess that's not scaling.
Of course, it was that same community (though pre-fork) that first implemented Xthin, which cuts the network bandwidth consumed by block propagation nearly in half.
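A rough model of where that ~2x figure comes from: without thin blocks, a node downloads every transaction twice, once via mempool relay and again inside the full block; a thin-block scheme like Xthin replaces the second copy with a short 64-bit transaction id. The byte counts below are my own illustrative assumptions, not measurements:

```python
# Rough model of why thin blocks roughly halve total tx-related bandwidth.
# Assumptions (illustrative): ~250-byte average transaction, 8-byte (64-bit)
# short transaction id in the thin block.
AVG_TX_BYTES = 250
SHORT_HASH_BYTES = 8

naive = 2 * AVG_TX_BYTES                 # relay copy + full copy in the block
thin = AVG_TX_BYTES + SHORT_HASH_BYTES   # relay copy + short id in thin block

print(f"bytes per tx: naive={naive}, thin={thin}")
print(f"savings factor ~ {naive / thin:.2f}x")
```

With these numbers the factor lands at about 1.9x, consistent with the "nearly 2x" claim.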
There is discussion of adopting Lightning, but not much traction for it. There are orders of magnitude of capacity to be gained by a simple blocksize change first.