Discussion:
[Wikidata] Looking for "data quality check" bots
Ettore RIZZA
2018-09-26 12:31:47 UTC
Dear all,

I'm looking for Wikidata bots that perform accuracy audits: for example,
comparing the birth dates of persons with the dates indicated in the
databases linked to the item by an external-id.

I do not even know if such bots exist. Bots are often poorly documented,
so I appeal to the community for some examples.

Many thanks.

Ettore Rizza
Federico Leva (Nemo)
2018-09-26 17:00:53 UTC
Post by Ettore RIZZA
I'm looking for Wikidata bots that perform accuracy audits: for example,
comparing the birth dates of persons with the dates indicated in the
databases linked to the item by an external-id.
This is mostly a screenscraping job, because most external databases are
only accessible in unstructured or poorly structured HTML form.

Federico
Paul Houle
2018-09-26 18:47:42 UTC
"Poorly structured" HTML is not all that bad in 2018 thanks to HTML 5
(which builds the "rendering decisions made about broken HTML from
Netscape 3" into the standard so that in common languages you can get
the same DOM tree as the browser)
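
A minimal sketch of that point in Python, assuming the html5lib package
(an implementation of the HTML5 parsing algorithm), so broken markup
yields the same tree a browser would build:

    # html5lib applies the HTML5 error-recovery rules, so this broken
    # snippet parses into two well-formed <p> elements.
    import html5lib

    broken = "<p>Date of birth: <b>1632-10-31<p>Place: Delft"
    doc = html5lib.parse(broken, namespaceHTMLElements=False)
    for p in doc.findall(".//p"):
        print("".join(p.itertext()))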

If you try to use an official or unofficial API to fetch data from some
service in 2018, you will have to add some dependencies, and you just
might open a can of whoop-ass that makes you reinstall Anaconda, or you
will learn something you'll never be able to unlearn about how XML
processing changed between two minor versions of the JDK.

On the other hand, I have often dusted off the old HTML-based parser I
made for Flickr and found I could get it to work for other media
collections, blogs, etc. just by changing the "semantic model" embodied
in the application, which could be as simple as a function or object
that knows something about the structure of some documents' URLs.

I cannot understand why so many standards have been pushed to integrate
RDF and HTML and have gone nowhere, while nobody has promoted the clean
solution of "add a CSS media type for RDF" that marks up the semantics
of HTML the way JSON-LD works.

Often, though, if you look at it that way, matching patterns against CSS
selectors gets you most of the way there these days.

I've had cases where I barely had to change the rule sets at all; none
of them have been more than 50 lines of code, and most are much less.
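
A hedged sketch of that "small rule set" idea, using BeautifulSoup's CSS
selector support; the site key and selectors below are invented for
illustration, not taken from any real catalogue:

    # The per-site "semantic model" is just a dict of CSS selectors; the
    # extraction loop itself never changes between sites.
    from bs4 import BeautifulSoup

    RULES = {
        "example-catalogue": {          # hypothetical site key
            "name": "h1.artist-name",   # hypothetical selectors
            "birth": "span.birth-date",
        },
    }

    def extract(html, site):
        soup = BeautifulSoup(html, "html.parser")
        record = {}
        for field, selector in RULES[site].items():
            node = soup.select_one(selector)
            record[field] = node.get_text(strip=True) if node else None
        return record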



Ettore RIZZA
2018-09-26 19:26:52 UTC
Hi,

Wikidata is obviously linked to a bunch of unusable external IDs, but
also to some very structured data. For the moment I'm interested in the
state of the art - even if it is based on poor scraping, why not?

I see for example this request for permission
<https://www.wikidata.org/wiki/Wikidata:Requests_for_permissions/Bot/Symac_bot_4>
for a bot able to retrieve information from the BNF (French national
library) database. It was refused because of copyright issues, but
simply checking the information without extracting anything is allowed,
isn't it?
Maarten Dammers
2018-09-29 11:02:34 UTC
Hi Ettore,
Post by Ettore RIZZA
Dear all,
I'm looking for Wikidata bots that perform accuracy audits: for
example, comparing the birth dates of persons with the dates indicated
in the databases linked to the item by an external-id.
Let's have a look at the evolution of automated editing. The first step
is to add missing data from anywhere; bots importing dates of birth are
an example of this. The next step is to add data from somewhere with a
source, or to add sources to existing unsourced or badly sourced
statements. As far as I can see that's where we are right now; see for
example this edit:
https://www.wikidata.org/w/index.php?title=Q41264&type=revision&diff=619653838&oldid=616277912
Of course the next step would be to compare existing sourced statements
with external data to find differences. But what would the workflow be?
Take for example Johannes Vermeer
( https://www.wikidata.org/wiki/Q41264 ). Extremely well documented and
researched, but
http://www.getty.edu/vow/ULANFullDisplay?find=&role=&nation=&subjectid=500032927
and https://rkd.nl/nl/explore/artists/80476 combined provide 3 different
dates of birth and 3 different dates of death. When it comes to these
kinds of date mismatches, it's generally first come, first served (the
first date added doesn't get replaced). Such a mismatch could show up in
some report. I can check it as a human and maybe make some adjustments,
but how would I sign it off to prevent other people from doing the same
thing over and over again?
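
A rough sketch of that comparison step in Python with the requests
library; the "external" date here is a hand-typed placeholder rather
than a value actually taken from ULAN or RKD, and rank/precision
handling is left out:

    # Fetch the date-of-birth (P569) claims for an item and compare them
    # with a date obtained elsewhere. Deciding which source is right, and
    # recording that a human signed the mismatch off, is the open part.
    import requests

    def wikidata_birth_dates(qid):
        url = f"https://www.wikidata.org/wiki/Special:EntityData/{qid}.json"
        entity = requests.get(url, timeout=30).json()["entities"][qid]
        claims = entity.get("claims", {}).get("P569", [])
        return [c["mainsnak"]["datavalue"]["value"]["time"]
                for c in claims if c["mainsnak"].get("datavalue")]

    external_date = "+1632-10-31T00:00:00Z"    # placeholder external value
    wd_dates = wikidata_birth_dates("Q41264")  # Johannes Vermeer
    print("match" if external_date in wd_dates else "mismatch", wd_dates)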

With federated SPARQL queries it becomes much easier to generate reports
of mismatches. See for example
https://www.wikidata.org/wiki/Property_talk:P1006/Mismatches .
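
That page relies on a federated query against the NTA endpoint, which I
won't try to reproduce here; as a sketch, though, a plain query against
the Wikidata Query Service already collects the Wikidata side of such a
report, i.e. items carrying both an NTA ID (P1006) and a date of birth
(P569):

    # Query WDQS for items that have both an NTA ID and a date of birth;
    # a mismatch report would then compare ?dob with the NTA record.
    import requests

    QUERY = """
    SELECT ?item ?nta ?dob WHERE {
      ?item wdt:P1006 ?nta ;
            wdt:P569 ?dob .
    } LIMIT 10
    """
    r = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "mismatch-report-sketch/0.1"},  # WDQS asks for a UA
        timeout=60,
    )
    for row in r.json()["results"]["bindings"]:
        print(row["item"]["value"], row["nta"]["value"], row["dob"]["value"])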

Maarten
Ettore RIZZA
2018-09-29 16:21:55 UTC
Hi Maarten,

Thank you very much for your answer and your pointers. The page (which I
did not know existed) containing a federated SPARQL query is definitely
close to what I mean. It is just missing one more step: deciding who is
right. If we look at the first result of the table
<https://www.wikidata.org/wiki/Property_talk:P1006/Mismatches> of
mismatches (Dmitry Bortniansky <https://www.wikidata.org/wiki/Q316505>)
and draw a little graph, the result is:

[image: Diagram.png]

We can see that the error (probably) comes from VIAF, which contains a
duplicate, and from the NTA, which obviously created an authority record
based on this bad VIAF ID.

My research is very close to this kind of case, and I am very interested
in knowing what is already implemented in Wikidata.

Cheers,

Ettore Rizza