what are the implications of this "quadratic TX validation" you guys are talking about?
we can't have TXs with a huge number of inputs? or something?
Exactly. If/when a transaction comes in with zillions of inputs then everyone verifying it will be subjected to a long computation.
zillions of inputs!

this i can understand
This is what BIP109 fixes, and why the 2 MB hard fork is useful to activate as soon as possible. For more info on why the 1.3 GB signature-hashing limit in the BIP109 2 MB hard fork (used by Bitcoin Classic) is necessary, and why SegWit does not help with this:
https://www.reddit.com/r/btc/comments/47f0b0/f2pool_testing_classic/d0deh29
The worst-case block validation costs that I know of for a 2.2 GHz CPU for the status quo, SegWit SF, and the Classic 2 MB HF (BIP109) are as follows (estimated):
1 MB (status quo): 2 minutes 30 seconds (19.1 GB hashed)
1 MB + SegWit: 2 minutes 30 seconds (19.1 GB hashed)
2 MB Classic HF: 10 seconds (1.3 GB hashed)
SegWit makes it possible to create transactions that don't hash a lot of data, but it does not make it impossible to create transactions that do hash a lot of data.
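To make the quadratic part concrete, here is a minimal sketch (not real Bitcoin serialization; the byte sizes are illustrative assumptions) of why legacy signature hashing scales as O(n²): each input's sighash preimage covers roughly the whole transaction, so total bytes hashed grows with inputs × tx size, whereas BIP143-style (SegWit) hashing uses a roughly fixed-size preimage per input because the prevouts, sequences, and outputs are hashed once and reused.

```python
# Toy model of bytes hashed during signature verification.
# Assumed sizes (bytes_per_input, per-input preimage) are illustrative,
# not taken from the actual Bitcoin protocol.

def legacy_sighash_bytes(n_inputs, bytes_per_input=100):
    """Pre-SegWit: every input signs a preimage covering ~the whole tx."""
    tx_size = n_inputs * bytes_per_input  # simplification: tx is mostly inputs
    return n_inputs * tx_size             # n preimages of ~tx_size each -> O(n^2)

def segwit_sighash_bytes(n_inputs, preimage_size=200):
    """BIP143-style: intermediate hashes are cached, so each input's
    preimage is roughly constant-size -> O(n) total."""
    return n_inputs * preimage_size

for n in (100, 1000, 10000):
    print(n, legacy_sighash_bytes(n), segwit_sighash_bytes(n))
```

Doubling the inputs quadruples the legacy figure but only doubles the SegWit-style figure, which is the asymmetry the validation-time estimates above reflect. But, as noted, SegWit only offers the cheap path for new-style outputs; old-style transactions can still force the quadratic work.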