Current predictive models essentially generate text word by word. Put simply, at each step there is a list of weighted candidate words, one of which is chosen stochastically (according to some distribution). These weights can be gently nudged by a "fingerprint": the algorithm is made to prefer certain words slightly more often, producing a statistically detectable pattern (the "fingerprint"). In a text of more than a thousand words, the probability of that pattern appearing by chance is very low (how low depends on how strongly the "fingerprint" was allowed to shift the weights).
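To make this concrete, here is a minimal Python sketch of the general idea, not any vendor's actual scheme: the toy vocabulary, the BIAS strength, and the helper names greenlist, watermarked_sample and detect are all illustrative assumptions. The "fingerprint" is modeled as a secret, hash-derived set of "green" words whose weights get a small boost at every step; detection then just counts how often the text lands in that set.

```python
import hashlib
import math
import random

VOCAB = [f"word{i}" for i in range(1000)]  # toy vocabulary (assumption)
BIAS = 2.0                                 # how strongly the fingerprint shifts the weights

def greenlist(prev_word: str, key: str = "secret") -> set[str]:
    """Deterministically pick ~half the vocabulary, seeded by the previous word and a secret key."""
    seed = int(hashlib.sha256((key + prev_word).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def watermarked_sample(weights: dict[str, float], prev_word: str) -> str:
    """Sample the next word after gently boosting the weights of the 'green' candidates."""
    green = greenlist(prev_word)
    adjusted = {w: wt * (math.exp(BIAS) if w in green else 1.0) for w, wt in weights.items()}
    words, wts = zip(*adjusted.items())
    return random.choices(words, weights=wts, k=1)[0]

def detect(text: list[str]) -> float:
    """Return a z-score: how far above the chance level (50%) is the share of 'green' words?"""
    hits = sum(1 for prev, cur in zip(text, text[1:]) if cur in greenlist(prev))
    n = len(text) - 1
    expected, stddev = n / 2, math.sqrt(n / 4)
    return (hits - expected) / stddev
```

In unwatermarked text, roughly half the words fall into the green set by chance, so the z-score stays near zero; in a watermarked text of a thousand words it climbs to a value that is extremely unlikely to occur randomly. Raising BIAS makes detection more reliable but distorts the word choice more, which is the trade-off the paragraph above alludes to.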