I wonder how useful it would be to begin running file(1) against the newly etched data and then start some crude descriptive analysis by determining the file type.
We would end up with a dataset linking the etched items to an RDF graph.
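Something like this would do the trick, I think (a rough Python sketch, assuming the etched payloads have already been extracted to local files; the etch# vocabulary and the URIs are just placeholders):

    import subprocess
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DCTERMS

    # Placeholder vocabulary; the real one is yet to be designed.
    ETCH = Namespace("http://example.org/etch#")

    def describe(payload_path, txid):
        """Run file(1) over an extracted payload and emit RDF triples."""
        g = Graph()
        g.bind("etch", ETCH)
        subject = URIRef("http://example.org/tx/" + txid)
        # file --brief --mime-type prints just the MIME type, e.g. "image/png"
        mime = subprocess.run(
            ["file", "--brief", "--mime-type", payload_path],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        g.add((subject, DCTERMS["format"], Literal(mime)))
        return g

    print(describe("payload.bin", "deadbeef").serialize(format="turtle"))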
From there a curator would be able to sign the graph; the precise semantics of this act are yet to be determined (but would be expressed in RDF).
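No idea yet what the right vocabulary is, but mechanically it might look something like this: canonicalise the graph, sign the digest, and express the act as more RDF. The sig# vocabulary is invented purely for illustration, and the key handling is hand-waved:

    import hashlib
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.compare import to_canonical_graph
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    SIG = Namespace("http://example.org/sig#")  # placeholder vocabulary

    def sign_graph(g, curator, key):
        """Canonicalise the graph, sign its SHA-256 digest, describe the act in RDF."""
        canon = to_canonical_graph(g).serialize(format="nt").encode("utf-8")
        signature = key.sign(hashlib.sha256(canon).digest())
        attestation = Graph()
        attestation.bind("sig", SIG)
        attestation.add((curator, SIG["signed"], Literal(signature.hex())))
        return attestation

    key = Ed25519PrivateKey.generate()
    curator = URIRef("http://example.org/curator/graham")
    print(sign_graph(Graph(), curator, key).serialize(format="turtle"))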
Eventually, when the structure for managing the collection emerges, we can etch an RDF graph as a daily index to aid in discovery of the new data.
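Concretely, the daily index could be a very small graph that just enumerates the day's discoveries; a sketch, with idx# again being a made-up vocabulary:

    from datetime import date
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DCTERMS, XSD

    IDX = Namespace("http://example.org/index#")  # placeholder vocabulary

    def daily_index(tx_uris, day=None):
        """Build a small RDF graph listing the items etched on a given day."""
        day = day or date.today()
        g = Graph()
        g.bind("idx", IDX)
        index = URIRef("http://example.org/index/" + day.isoformat())
        g.add((index, DCTERMS["date"], Literal(day.isoformat(), datatype=XSD.date)))
        for tx in tx_uris:
            g.add((index, IDX["contains"], URIRef(tx)))
        return g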
Obviously, all of this would apply only to data which can be decoded, i.e. where it is either plain or the key has been made public. For encrypted payloads there are fewer details, to be sure, but there is still descriptive metadata such as the number of bytes, the datestamp, and the address of the tx creator; it all has value for the curation process.
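Those fields translate into triples with no fuss at all, even when the payload itself stays opaque; something like this, with the property names again being placeholders:

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import XSD

    ETCH = Namespace("http://example.org/etch#")  # placeholder, as before

    def describe_opaque(txid, n_bytes, timestamp, creator_addr):
        """Record the metadata that survives even when the payload is encrypted."""
        g = Graph()
        s = URIRef("http://example.org/tx/" + txid)
        g.add((s, ETCH["byteCount"], Literal(n_bytes, datatype=XSD.integer)))
        g.add((s, ETCH["dateStamp"], Literal(timestamp, datatype=XSD.dateTime)))
        g.add((s, ETCH["creatorAddress"], Literal(creator_addr)))
        return g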
That's do-able. I'm currently using Fuseki to store the RDF graphs that I create by mapping the directed acyclic graph of the blockchain into the directed acyclic graph that is RDF <- that's a mapping. Technology such as Fresnel (a display vocabulary in RDF) completes the look. Okay, I'm not yet publishing indices.
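For what it's worth, getting a graph into Fuseki is just an HTTP PUT against its Graph Store Protocol endpoint; a sketch, assuming a dataset called /etch on the default port:

    import requests
    from rdflib import Graph

    def put_graph(g, graph_uri, endpoint="http://localhost:3030/etch/data"):
        """PUT a named graph into Fuseki via the SPARQL Graph Store Protocol."""
        r = requests.put(
            endpoint,
            params={"graph": graph_uri},
            data=g.serialize(format="turtle"),
            headers={"Content-Type": "text/turtle"},
        )
        r.raise_for_status()

    put_graph(Graph(), "http://example.org/graph/example")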
There'll be fun and games working out whether some piece of binary is actually what it seems/claims to be. I wonder if any of them will make it into the category
"Things I won't work with".
Cheers
Graham