To validate is to compute, so indexing metadata for past validation events and caching any detailed payloads can save time and effort. Why index? To search. Why search? To find relevant (“likely valid”), ranked (“more likely valid”) results.
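To make that concrete, here is a minimal sketch in Python of the cache-before-recompute idea: validation events are keyed by a content digest, so a payload that has already been validated is looked up rather than recomputed. The event fields and the in-memory dict are illustrative assumptions, not a standard.

```python
import hashlib
import json

# Illustrative sketch: a content-addressed index of past validation events.
# The event fields and the in-memory dict are assumptions, not a standard.
_events: dict[str, dict] = {}

def validate(payload: bytes, validator) -> dict:
    key = hashlib.sha256(payload).hexdigest()   # identical payloads share a key
    if key in _events:
        return _events[key]                     # indexed: no recomputation
    event = {
        "payload_sha256": key,
        "validator": getattr(validator, "__name__", repr(validator)),
        "valid": validator(payload),            # the expensive computation
    }
    _events[key] = event                        # cache the detailed result
    return event

def is_json(payload: bytes) -> bool:
    """A stand-in validator: does the payload parse as JSON?"""
    try:
        json.loads(payload)
        return True
    except ValueError:
        return False

validate(b'{"a": 1}', is_json)   # computes
validate(b'{"a": 1}', is_json)   # cache hit: same digest, same event
```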
A few years ago, I discovered Mike Caulfield's The Garden and the Stream: A Technopastoral and understood why I wasn't happy with my blog. Blogs are streams, timelines of posts. Each post has a timestamp, and is considered "finished". Later changes are technically possible, but culturally limited to corrections. A blog post is considered a published essay, and therefore comes with a date of publication.
Given a fip:Metadata-schema and a validator for it, such as a sh:Validator or a JSON Schema, how do you determine that the validator is…valid? That it speaks the desired fip:Knowledge-representation-language, that it knows all the terms in a desired fip:Structured-vocabulary and checks their usage against a desired fip:Semantic-model? In other words, that it adheres to a doap:Specification? I do not know.
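One partial answer, at least for the JSON Schema half of the question, is meta-validation: check the schema itself against the meta-schema of the draft it declares (SHACL has an analogous move in validating a shapes graph with SHACL-SHACL). A sketch with the Python jsonschema library; the schema here is an illustrative assumption. Note what this does not establish: conformance to the meta-schema says the schema is well-formed, not that it captures the intended structured vocabulary or semantic model.

```python
from jsonschema import Draft202012Validator
from jsonschema.exceptions import SchemaError

schema = {   # illustrative schema, not from any real specification
    "type": "object",
    "properties": {"identifier": {"type": "string"}},
    "required": ["identifier"],
}

try:
    # Check the schema against the 2020-12 meta-schema.
    Draft202012Validator.check_schema(schema)
    print("well-formed JSON Schema")
except SchemaError as err:
    print(f"malformed schema: {err.message}")
```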
What conveys that data has been validated or is yet to be validated? How do you identify the nature and process of validation for a given digital object? Who is involved? What auxiliary resources are involved? Is the process: Do-it-yourself, with (implicit or explicit) references to validation assets? Do-it-with-you, with references to validation services? Do-it-for-you, with references to validation results and/or signoffs?
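As a thought experiment, the answers might be carried by a small provenance record attached to the digital object. A hypothetical Python sketch; every field name here is an assumption of mine, not an existing vocabulary:

```python
from dataclasses import dataclass, field
from enum import Enum

class Mode(Enum):
    DIY = "do-it-yourself"     # references validation assets (schemas, shapes)
    DIWY = "do-it-with-you"    # references validation services
    DIFY = "do-it-for-you"     # references validation results / signoffs

@dataclass
class ValidationRecord:
    """Hypothetical record answering: what was validated, by whom, how?"""
    object_id: str                                      # the digital object
    mode: Mode                                          # nature of the process
    agents: list[str] = field(default_factory=list)     # who is involved
    resources: list[str] = field(default_factory=list)  # auxiliary resources

record = ValidationRecord(
    object_id="https://example.org/dataset/42",
    mode=Mode.DIWY,
    agents=["https://orcid.org/0000-0000-0000-0000"],
    resources=["https://validator.example.org/api"],
)
```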
This week on Machine-Centric Science, I interviewed Martynas Jusevičius, currently at AtomGraph and based in Copenhagen, Denmark.
At a base level, an identifier is simple to trace – it is the sequence (modulo concurrency) of assertions of which it is a part. In fact, this can be the basis for tracing the representation of a “thing” as the flock of relationships between identifiers, i.e. metadata, that waxes and wanes in association with “the” identifier of the thing.
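A sketch of that tracing with rdflib: the flock of relationships is just the set of triples in which the identifier appears, as subject or object. The graph source and the identifier are hypothetical.

```python
from rdflib import Graph, URIRef

g = Graph()
g.parse("metadata.ttl", format="turtle")        # hypothetical metadata source

thing = URIRef("https://example.org/thing/1")   # hypothetical identifier

# The identifier's trace: every assertion it takes part in.
outgoing = list(g.triples((thing, None, None)))   # thing as subject
incoming = list(g.triples((None, None, thing)))   # thing as object

for s, p, o in outgoing + incoming:
    print(s.n3(), p.n3(), o.n3())
```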
Good identifiers are opaque, so translation is by association – owl:sameAs, skos:exactMatch, or some other relationship. Translation doesn’t follow from reading a sign, but from retrieving a sense. If metadata is relationships between identifiers, then metadata is the medium of conceptual convergence.
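Translation by association can be computed: follow owl:sameAs and skos:exactMatch links, in both directions and transitively, to collect the cluster of co-referring identifiers. A minimal sketch with rdflib, assuming the graph is already populated:

```python
from rdflib import Graph, URIRef
from rdflib.namespace import OWL, SKOS

def equivalents(g: Graph, start: URIRef) -> set[URIRef]:
    """Collect identifiers reachable from `start` via owl:sameAs or
    skos:exactMatch, treated as symmetric and transitive."""
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for pred in (OWL.sameAs, SKOS.exactMatch):
            neighbours = set(g.objects(node, pred)) | set(g.subjects(pred, node))
            for n in neighbours - seen:
                seen.add(n)
                frontier.append(n)
    return seen
```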
Where do you look for identifiers? If you’re looking for a URI, the IANA has a registry of schemes, like https, mailto, and tel. These days, to resolve an identifier, you generally use the https scheme, which has an authority component in its URI format.
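Python's standard library will split a URI into those components; the scheme registry itself lives at https://www.iana.org/assignments/uri-schemes, which makes a convenient example:

```python
from urllib.parse import urlsplit

# RFC 3986 generic syntax: scheme://authority/path?query#fragment
parts = urlsplit("https://www.iana.org/assignments/uri-schemes")
print(parts.scheme)   # 'https' -- an IANA-registered scheme
print(parts.netloc)   # 'www.iana.org' -- the authority component
print(parts.path)     # '/assignments/uri-schemes'
```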
How do you validate that an identifier service provides global uniqueness of minted keys, persistence of bindings, and resolution of keys to descriptive metadata? If you know that a given ID provided by a service is unique, that tells you nothing at all about the uniqueness of another ID provided by that service.
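Of the three properties, resolution is the one a client can at least spot-check; uniqueness and persistence are claims about the service's policy, which no sample of IDs can confirm. A sketch of such a probe, assuming the service resolves identifiers over HTTPS and serves descriptive metadata via content negotiation (the identifier URL and Accept header are illustrative):

```python
import requests

def resolves(identifier_url: str) -> bool:
    """Spot-check one property: does the key resolve to metadata?"""
    resp = requests.get(
        identifier_url,
        headers={"Accept": "application/ld+json"},  # ask for descriptive metadata
        allow_redirects=True,
        timeout=10,
    )
    return resp.ok

print(resolves("https://example.org/id/abc123"))   # hypothetical identifier
```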
Day 1 of my five-week experiment to elaborate on FAIR-enabling services, and I’ve already fallen flat on my face.