Written by Lydia Pintscher

Visualizing history with automated event maps

German summary: Fred Johansen has created a website that makes it easy to place historical events in time and space, based on data from Wikidata. Here he talks about the site and his work on it.

The following post is a guest blog by Fred Johansen about EventZoom.

Just as today’s online maps are continually updated, historical maps can be automatically generated and updated to reflect our ever-evolving knowledge about the past. As an example, allow me to tell you about a project I’m working on. I recently implemented an event visualization site that accepts geolocation data combined with information about the time spans of events, and renders the input as points on a map that is zoomable in both time and space. Each point is an object with a title, a description, latitude/longitude and a time, as well as a reference back to its source. But what source should be used to fill this framework with data? Even though this is a tool born outside the Wikimedia world, the best content I’ve found for it so far is Wikidata – more specifically, the Wikidata API. Importing data about events that are part of larger events, all defined in Wikidata and restricted to those with a start or end date as well as a location, yields all the data needed for this kind of dynamic historical map.

Extracting data from the Wikidata API works like a charm. Of course, some data might be missing from Wikidata: an event may have an end date, for example, but no start date. What’s fantastic about Wikidata is that it’s easy to fix this by simply adding the missing fact. Besides enriching Wikidata itself, this also improves what can be visualized.

This activity forms a positive feedback loop: visualizing, say, the events of a war on a map makes errors and omissions quite obvious, which is an incentive to update Wikidata – and that in turn triggers the re-generation of the map.

The site I’m referring to is EventZoom – currently in beta, so far containing 82 major event maps, and growing. You can extend it yourself by triggering the visualization of new maps: when you search for an event, for example a war, and the Search page reports it as missing, you can add it directly. All you need is its Q-ID from Wikidata. Paste this ID into the given input field, and the event will automatically be imported from the Wikidata API and a map generated – with the restriction that there must exist some ‘smaller’ events that contain time and location data and are part (P361) of the major event. Those smaller events become the points on our map, with automatic links back to their sources. For the time being the import itself also depends on another service, but I expect that will change in the future.
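The import described above can be sketched in a few lines. This is a minimal sketch, assuming the JSON shape returned by the Wikidata API’s wbgetentities action; P625 (coordinate location), P580 (start time) and P582 (end time) are real Wikidata property IDs, but the sample entity and the point format are invented for illustration and are not EventZoom’s actual code:

```python
# Turn Wikidata entity JSON (wbgetentities shape) into map points like
# those EventZoom renders. P625 = coordinate location, P580 = start time,
# P582 = end time. The sample entity is fabricated for illustration.

def first_value(entity, prop):
    """Return the first mainsnak datavalue for a property, or None."""
    for claim in entity.get("claims", {}).get(prop, []):
        snak = claim.get("mainsnak", {})
        if snak.get("snaktype") == "value":
            return snak["datavalue"]["value"]
    return None

def to_map_point(entity):
    """Build a {title, lat, lon, start, end} dict, or None if the entity
    lacks coordinates or any date (the restriction described above)."""
    coords = first_value(entity, "P625")
    start = first_value(entity, "P580")
    end = first_value(entity, "P582")
    if coords is None or (start is None and end is None):
        return None
    return {
        "title": entity.get("labels", {}).get("en", {}).get("value", entity["id"]),
        "lat": coords["latitude"],
        "lon": coords["longitude"],
        "start": start["time"] if start else None,
        "end": end["time"] if end else None,
    }

sample = {
    "id": "Q9999999",  # illustrative, not a real item
    "labels": {"en": {"value": "Operation Overlord"}},
    "claims": {
        "P625": [{"mainsnak": {"snaktype": "value",
                               "datavalue": {"value": {"latitude": 49.34, "longitude": -0.6}}}}],
        "P580": [{"mainsnak": {"snaktype": "value",
                               "datavalue": {"value": {"time": "+1944-06-06T00:00:00Z"}}}}],
    },
}

print(to_map_point(sample))
```

A real importer would first fetch the items that are part (P361) of the major event and then run each through a filter like this, dropping those without coordinates or dates.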

Although you can always click Import to get the latest info from Wikidata, an automatic update is also in the pipeline: a re-import will be triggered whenever the event or any of its constituent parts change in Wikidata. As for other plans, at the very least our scope should encompass all the major events of history. Wars represent a practical starting point, insofar as they consist of events that are mostly bounded by definite time spans and locations and so can be defined by those characteristics. The next step would be to extend the map visualization to other kinds of events – as for Wikidata, it could be interesting to visualize all kinds of items that can be presented with a combination of geolocation and temporal data, and that can be grouped together in meaningful ways.


Using Wikidata to Improve the Medical Content on Wikipedia

German summary: A few days ago a scientific paper was published that examines how Wikipedia articles on medical topics can be improved using Wikidata. Here the authors present the paper and their findings.


This is a guest post by Alexander Pfundner, Tobias Schönberg, John Horn, Richard D. Boyce and Matthias Samwald. They have published a paper about how medical articles on Wikipedia can be improved using Wikidata.

An example of an infobox that shows drug-drug interactions from Wikidata. Including this information could be of significant benefit to patients around the world.

The week before last a study was published in the Journal of Medical Internet Research that investigates how Wikidata can help to improve medical information on Wikipedia. The researchers from the Medical University of Vienna, the University of Washington and the University of Pittsburgh who carried out the study are active members of the Wikidata community.

The study focuses on how potential drug-drug interactions are represented in Wikipedia entries for pharmaceutical drugs. Exposure to these potential interactions can severely diminish the safety and effectiveness of therapies. Given that many patients and professionals rely on Wikipedia to read up on medical subjects, complete, high-quality and relevant information about these interactions can significantly improve the situation of patients around the world.

In the course of the study, a set of high-priority potential drug-drug interactions was added to the Wikidata items of common pharmaceutical drugs (e.g. Ramelteon). The data was then compared to the existing information on the English Wikipedia, revealing that many critical interactions were not explicitly mentioned. The situation is probably worse for many other languages. Wikidata could play a major role in alleviating this: not only does a single edit benefit all 288 language versions of Wikipedia, but the tools for adding and checking data are much easier to handle. In addition, adding qualifiers (property-value pairs that further describe the statement, e.g. the severity of the interaction) and sources to each statement puts the data in context and makes cross-checking easier. The study found Wikidata capable of acting as a repository for this data.

The next part of the study investigated how potential drug-drug interaction information on Wikipedia could be automatically written and maintained (i.e. in the form of infoboxes or within a paragraph). Working with the current API and modules, the investigators found that the interface between Wikidata and Wikipedia is already quite capable, but that large datasets still require better mechanisms for intelligently filtering and formatting the data. If the data is displayed in an infobox, further constraints arise from the differing conventions on how much information an infobox may display, and on whether large datasets can be placed in tabs or collapsible cells.

Overall the study comes to the conclusion that, current technical limitations aside, Wikidata is capable of improving the reliability and quality of medical information in all language versions of Wikipedia.

The authors of the study would like to thank the Wikidata and Wikipedia communities for all their help, as well as the Austrian Science Fund and the United States National Library of Medicine for funding the study.


Improving data quality on Wikidata – checking what we have

German summary: A team of students from the Hasso Plattner Institute in Potsdam is currently working with Wikimedia Deutschland on tools to improve and assure data quality on Wikidata. In this post they introduce their two projects: checking Wikidata’s data for internal consistency, and checking Wikidata’s data against other databases.


Hello, we are the Wikidata Quality Team – a team of students from the Hasso Plattner Institute in Potsdam, Germany. For our bachelor project we are working with the Wikidata development team to ensure high quality of the data on Wikidata.

Wikidata provides a lot of structured data, open to everyone. Quite a lot, actually: an enormous amount approaching the mark of 13.5 million items, each of which has numerous statements. The data got into the system through diligent people and through bots, and neither people nor bots are known for infallibility. Errors are made, and somehow we have to find and correct them. Besides erroneous data, incomplete data is another problem. Imagine you are a resident of Berlin and want to improve the Wikidata item about the city. You go ahead and add its highest point (Müggelberge), its sister cities (Los Angeles, Madrid, Istanbul, Warsaw and 21 others) and its new head of government (Michael Müller). As you do it the correct way, you use qualifiers and references. Good job – but did you think of adding Berlin as the sister city of those 25 cities? Although the data you entered is correct, it is incomplete, and you have inadvertently introduced an inconsistency. And that is all assuming you used the correct items and properties and did not make a typo while entering a statement. Thirdly, things change. Population numbers vary, organizations are dissolved and artists release new albums. Wikidata has the huge advantage that such a change only has to be made in one place – but still, someone has to make it and, even more importantly, someone has to become aware of it.

Facing the problems mentioned above, two projects have emerged. People using Wikidata add identifiers from external databases like GND, MusicBrainz and many more. So why not make use of them? We are developing a tool that scans an item for those identifiers and then searches the linked databases for data against which it compares the item’s statements. This not only helps us verify Wikidata’s content and find mismatches that could indicate errors, but also makes us aware of changes. MusicBrainz specializes in artists and composers, GND in data related to people, and these specialists’ data is likely to be up to date. By using their databases for cross-checking, we hope to keep the data in all fields represented in Wikidata current.
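The cross-checking idea can be sketched roughly as follows. The two records below are hand-made illustrations – a real tool would fetch one via the item’s MusicBrainz or GND identifier – and the field names are hypothetical:

```python
# Compare selected Wikidata statement values against a record from an
# external database reached via an identifier on the item. Both records
# here are fabricated for illustration; a real tool would fetch them
# from the Wikidata and MusicBrainz/GND APIs.

def cross_check(wikidata_record, external_record, fields):
    """Return a list of (field, wikidata_value, external_value) mismatches.
    Fields missing on either side are skipped rather than reported."""
    mismatches = []
    for field in fields:
        wd = wikidata_record.get(field)
        ext = external_record.get(field)
        if wd is not None and ext is not None and wd != ext:
            mismatches.append((field, wd, ext))
    return mismatches

wikidata_record = {"name": "Johann Sebastian Bach", "birth": "1685-03-31", "death": "1750-07-28"}
external_record = {"name": "Johann Sebastian Bach", "birth": "1685-03-21", "death": "1750-07-28"}

for field, wd, ext in cross_check(wikidata_record, external_record, ["name", "birth", "death"]):
    print(f"Mismatch in {field}: Wikidata says {wd}, external source says {ext}")
```

A mismatch like the birth date above would then be flagged for a human to review – it might be a genuine error, or (as with old-style vs. new-style calendar dates) two defensible values that need a qualifier.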

The second project focuses on constraints on properties. Here are some examples to illustrate what this means:

  • Items that have the property “date of death” should also have “date of birth”, and their respective values should not be more than 150 years apart
  • Properties like “sister city” are symmetric, so items referenced by such a statement should also have a “sister city” statement linking back to the original item
  • Analogously, properties like “has part” and “part of” are inverse and should in many cases be used on both items
  • Identifiers for IMDb, ISBN, GND, MusicBrainz etc. always follow a specific pattern that we can verify
  • And so on…

Checking these constraints and indicating issues when someone visits an item’s page helps identify which statements should be treated with caution, and encourages editors to fix errors. We are also planning to provide ways to fix issues (semi-)automatically (e.g. by adding the missing sister-city statement once it is confirmed that the city really has this sister city). We also want to check these constraints when someone saves a new statement, which will hopefully prevent errors from getting into the system in the first place.
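A minimal sketch of two of the constraint checks above, run on a toy in-memory item store; the items, the plain-English property names and the simplified IMDb pattern are illustrative assumptions, not Wikidata’s actual data model:

```python
import re

# Toy item store: each item maps property names to values. Q4's IMDb ID
# contains a letter O instead of a zero, a typical typo the pattern
# check should catch.
items = {
    "Q1": {"label": "Berlin", "sister city": ["Q2", "Q3"]},
    "Q2": {"label": "Los Angeles", "sister city": ["Q1"]},
    "Q3": {"label": "Madrid", "sister city": []},  # missing the link back
    "Q4": {"label": "Some film", "IMDb ID": "tt00O1234"},
}

def check_symmetric(items, prop):
    """Report A -> B links where B does not link back to A."""
    issues = []
    for item_id, item in items.items():
        for target in item.get(prop, []):
            if item_id not in items.get(target, {}).get(prop, []):
                issues.append((item_id, target))
    return issues

def check_identifier(items, prop, pattern):
    """Report identifier values that do not match the expected pattern."""
    regex = re.compile(pattern)
    return [(item_id, value)
            for item_id, item in items.items()
            if (value := item.get(prop)) is not None and not regex.fullmatch(value)]

print(check_symmetric(items, "sister city"))             # Berlin -> Madrid has no link back
print(check_identifier(items, "IMDb ID", r"tt\d{7,8}"))  # Q4's malformed identifier
```

The same shape of check extends naturally to the other constraints listed above, such as comparing “date of birth” and “date of death” values on the same item.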

That’s about it – to keep up with the news visit our project page. We hope you are fond of our project and we appreciate your feedback! Contact information can also be found on the project page.


Scaling Wikidata: success means making the pie bigger

German summary: Wikidata is getting bigger and more successful. Over the next year we have to develop strategies and tools to scale Wikidata. In this post I lay out my thoughts on this.


Wikidata is becoming more successful every single day. Every single day we cover more topics and have more data about them. Every single day new people join our community. Every single day we provide more people with more access to more knowledge. This is amazing. But with any growth comes growing pains. We need to start thinking about them and build strategies for dealing with them.

Wikidata needs to scale in two ways: socially and technically. I will not go into the details of technical scaling here but instead focus on the social scaling. With social scaling I mean enabling all of us to deal with more attention, data and people around Wikidata. There are several key things that need to be in place to make this happen:

  • A welcome wagon and good documentation for newcomers to help them become part of the community and understand our shared norms, values, policies and traditions.
  • Good tools to help us maintain our data and find issues quickly and deal with them swiftly.
  • A shared understanding that providing high-quality data and knowledge is important.
  • Communication tools like the weekly summary and Project chat that help us keep everyone on the same page.
  • Structures that scale, with enough people holding advanced rights that no one of them gets overwhelmed or burned out.

We have all of these in place but all of them need more work from all of us to really prepare us for what is ahead over the next months and years.

One of the biggest pressures Wikidata is facing now is organisations wanting to push large amounts of data into Wikidata. This is great if it is done correctly and if it is data we truly care about. There are key criteria I think we should consider when accepting large data donations:

  • Is the data reliable, trustworthy, current and published somewhere referenceable? We are a secondary database, meaning we state what other sources say.
  • Is the data going to be used? Data that is not used is exponentially harder to maintain because fewer people see it.
  • Is the organization providing the data going to help keep it in good shape? Or are other people willing to do it? Data donations need champions who feel responsible for making them a success in the long run.
  • Is it helping us fix an important gap or counter a bias we have in our knowledge base?
  • Is it improving existing topics more than adding new ones? We need to improve the depth of our data before we continue to expand its breadth.

So once we have this data, how can we make sure it stays in good shape? One of the crucial points for scaling Wikidata is the quality of, and trust in, its data. How can we ensure high quality even at a large scale? These are the key pieces necessary to achieve this:

  • A community that cares about making sure the data we provide is correct, complete and up-to-date
  • Many eyes on the data
  • Tools that help maintenance
  • An understanding that we don’t have to have it all

Many eyes on the data. What does this mean? The idea is simple: the more people see and use the data, the more people will be able to find and correct mistakes. The more data from Wikidata is used, the more people come into contact with it and help keep it in good shape. More usage of Wikidata’s data in the large Wikipedias is an obvious goal here. More and more infoboxes need to be migrated over the next year to make use of Wikidata. The development team will concentrate on making this possible by removing the big remaining blockers, such as support for quantities with units and access to data from arbitrary items, and by providing good examples and documentation. At the same time we need to improve the visibility of Wikidata changes in the Wikipedias’ watchlists and recent changes. Just as important for getting more eyes on our data are third-party users outside Wikimedia. Wikidata’s data is starting to be used all over the internet and is being exposed to people even in unexpected places. In both cases it is of utmost importance that it is easy for people to feed changes back to Wikidata. This will only work with well-functioning feedback loops. We need to encourage third-party users to be good players in our ecosystem and make this happen – also for their own benefit.

Tools that help maintenance. As Wikidata scales we also need to provide more and better tools to find issues in the data and fix them. Making sure that the data is consistent with itself is the first step. A team of students is currently working with the development team on improving the system for that. This will make it easy to spot, for instance, people whose date of birth is after their date of death. The next step is checking against other databases and reporting mismatches – the other part of the student project. When you look at an item you should immediately see statements that are flagged as potentially problematic and review them. In addition, more and more visualizations are being built that make it easy to spot outliers. One recent example is the Tree of Life.

An understanding that we don’t have to have it all. We should not aim to be the one and only place for structured open data on the web. We should strive to be a hub that covers important ground but also gives users the ability to find other, more specialized sources. Our mission is to provide free access to knowledge for everyone – and we can do this just as well by pointing to other places where people can get this information. This is especially true for niche topics and highly detailed data. We are part of a larger ecosystem, and success means making the pie bigger, not getting the whole pie for ourselves. We can’t do it all on our own.

If we keep all this in mind and preserve our welcoming culture we can continue to build something truly amazing and provide more people with more access to more knowledge every single day.

Improving the data quality and trust in the data we have will be a major development focus of the first months of 2015.



Pushing Wikidata to the next level

(The German version of this post is here.)

In early 2010 I met Denny and Markus for the first time in a small room at the Karlsruhe Institute of Technology to talk about Semantic MediaWiki, its development and its community. I was intrigued by the idea they’d been pushing since 2005 – bringing structured data to Wikipedia. So when the time came to assemble the team for the development of Wikidata and Denny approached me to do community communications for it, there was no way I could have said no. The project sounded amazing and the timing was perfect, since I was about to finish my computer science studies. In the one and a half years since then we have achieved something amazing: we’ve built a great technical base for Wikidata and, much more importantly, an amazing community around it. We’ve built the foundation for something extraordinary. On a personal level, I could never have dreamed where that one meeting in a small room in Karlsruhe would take me.

From now on I will be taking over product ownership of Wikidata as its product manager.

Up until today we’ve built the foundation for something extraordinary. But at the same time there are still a lot of things that need to be worked on by all of us together. The areas that we need to focus on now are:

  • Building trust in our data. The project is still young and the Wikipedia editors and others are still wary of using data from Wikidata on a large scale. We need to build tools and processes to make our data more trustworthy.
  • Improving the user experience around Wikidata. Building Wikidata to the point where it is today was a tremendous technical task that we achieved in a rather short time. This meant, though, that in places the user experience did not get as much attention. We need to make the experience of using Wikidata smoother.
  • Making Wikidata easier to understand. Wikidata is a very geeky and technical project. However to be truly successful it will need to be easy to get the ideas behind it.

These are crucial for Wikidata to have the impact we all want it to have. And we will all need to work on those – both in the development team and in the rest of the Wikidata community.

Let’s make Wikidata a joy to use and get it used in places and ways we can’t even imagine yet.



Wikidata and other technical bits at Wikimania

Denny, Lydia and Daniel (by Fabrice Florin, CC-by-sa 2.0)

I’m back from an amazing Wikimania. First of all thank you to everyone who helped make the event happen. It was very well organized and an overall useful and productive event. I was there to discuss everything Wikidata as well as new technology like the Visual Editor and Flow and how they affect the German language Wikipedia.

It felt like Wikidata and the VisualEditor were on everyone’s mind during this Wikimania. No matter which talk or panel or dinner I went to, every single one of them touched on Wikidata and the VisualEditor in some way. It’s great to see the Wikimedia community embrace Wikidata as its sister project. And the VisualEditor – while still rough – seems to be getting to that point very quickly too.


The Wikidata tool ecosystem

The following post is a guest post by Magnus Manske, an active developer of tools around Wikidata and author of the software that later evolved into MediaWiki.

Wikidata is the youngest child of the Wikimedia family and mainly serves as a “Commons for factoids”: a central repository for the key data of Wikipedia’s topics and for the links between its numerous language editions. Wikidata currently already contains over 10 million items, more than any language edition of Wikipedia (at the moment the English-language Wikipedia has 4.2 million entries). And although Wikidata – just like Commons – mainly serves to support Wikipedia and its sister projects, it offers substantial added value: Wikidata provides machine-readable, interlinked data about millions of topics in many languages via a standard interface (API).

From the humble beginnings of artificial intelligence research, through today’s applications such as Google’s Knowledge Graph and Wolfram Alpha, to future systems like “intelligent” user agents or (who knows?) the Singularity – such a structured body of data has long been regarded as the “holy grail” of computer science.

The scale of such a data collection can be daunting, however. While some companies have the resources to fund projects of this kind, other groups (such as DBpedia) have tried to make use of the freely accessible data stored in Wikipedia. But Wikidata’s mix of human and bot edits, together with Wikipedia’s extensive body of knowledge and a number of innovative features such as multiple property types, source annotations and qualifiers, brings a leap in quality to the “web of knowledge”. And by now there are several tools that make use of these features and tap the potential of this web of knowledge. A fairly complete list of these tools is available here.

Views on Wikidata

Family tree of Johann Sebastian Bach

For a very vivid example of such a tool, have a look at Mozart. The tool does not merely collect and display the data; it “understands” that this item is a person and queries additional, person-specific items such as relatives. It also shows person-specific information that does not refer to other items, for example authority control data. Mozart’s compositions are listed and can be played right on the page if they exist as files on Commons. To a degree, the tool can also use the language information in Wikidata, so that the same page can (mostly) be displayed in German.

Instead of looking only for immediate relatives, a tool can also follow a “chain” of certain properties between items and generate an “item cluster”, such as a family tree (see Mozart’s pretty and extensive family tree). The family tree around John F. Kennedy contains more than 10,000 people. In a similar way, a tool can follow taxonomic links between species back to their origins and generate a complete tree of life from them (warning: huge page!).

These tools show that Wikidata, although still in its infancy, can produce complex results with quite a modest amount of programming. For a more futuristic demonstration of the possibilities, there is the talking Wiri (Google Chrome recommended).

Edit this item

Those who have already volunteered on Wikimedia projects will not be surprised that more and more editing tools are becoming available for Wikidata as well. Some of these tools serve a dual purpose: querying Wikidata and displaying the results on the one hand, and pointing out outstanding to-dos on the other. A look at the distribution of genres of television series on Wikidata quickly shows that far more than half have no genre assigned at all. (Tip: click on a slice of the pie chart to see the associated items. Can you assign Lost to a genre?)

When editing Wikidata, an item is usually linked by searching for its name. You are out of luck, however, if you search for “John Taylor”: at the moment there are 52 items with that name but without usable descriptions. To find all items that share the same label, the Terminator helps. This tool also provides a daily updated list of items that have the same title but no description.

In a similar way, items can be searched by Wikipedia category. Those who need a more complex filter, want to write their own tool, or are looking for a way to ease their workload can use a tool that, for example, finds operas without a librettist (although changing the query requires editing the URL).

In addition, there are many JavaScript-based tools for working directly in Wikidata. Whether you want to import all language links or the complete taxonomy of a species with a single click, mark the current item as a female football player from Bosnia, or copy properties of one item to all items in the same Wikipedia category – there are tools for all of these tasks.

And this is just the beginning

While most of these tools are little more than demos or mainly serve Wikidata and its editors, they do show the extraordinary potential of this project. Wikidata may not have much to say about Archduke Ernst of Austria, but it is still more than the English-language Wikipedia’s article about him (which does not exist) – and perhaps the information is enough to write a stub article. And as more data is added, more data types (dates, places, etc.) become available and the ways of querying Wikidata become more effective, I am sure that quite a few far more amazing tools will be written in the near future – unless the Singularity writes them for us.


The Wikidata tool ecosystem

(The German version of this article is here.)

The following is a guest post by Magnus Manske, active tool developer around Wikidata and author of the software that later evolved into MediaWiki.

Wikidata is the youngest child of the Wikimedia family. Its main purpose is to serve as a "Commons for factoids", a central repository for key data about the topics on, and links between, the hundreds of language editions of Wikipedia. At the time of writing, Wikidata already contains about 10 million items, more than any edition of Wikipedia (English Wikipedia currently has 4.2 million entries). But while, as with Commons, its central purpose is to serve Wikipedia and its sister projects, Wikidata has significant value beyond that: it offers machine-readable, interlinked data about millions of topics in many languages via a standardized interface (API).
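To make "machine-readable via a standardized interface" concrete, here is a minimal Python sketch (standard library only) that reads one item through the real `wbgetentities` API action; the helper names are my own, and Q254 is the item id for Wolfgang Amadeus Mozart.

```python
import json
import urllib.request

API = "https://www.wikidata.org/w/api.php"

def fetch_entity(qid):
    """Fetch a single entity record as JSON from the public Wikidata API."""
    url = f"{API}?action=wbgetentities&ids={qid}&format=json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["entities"][qid]

def english_label(entity):
    """Read the English label out of an entity record, if present."""
    return entity.get("labels", {}).get("en", {}).get("value")

# Usage (requires network access):
#   entity = fetch_entity("Q254")   # Q254: Wolfgang Amadeus Mozart
#   print(english_label(entity))
```

The same entity record also carries descriptions, sitelinks, and claims, which is what the tools below build on.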

Such a structured data repository has long been a "holy grail" of computer science, from the humble beginnings of research into artificial intelligence, through current applications like Google’s Knowledge Graph and Wolfram Alpha, towards future systems like "intelligent" user agents or (who knows?) the Singularity.

The scale of any such data collection is daunting, and while some companies can afford to pour money into it, other groups, such as DBpedia, have tried to harvest the free-form data stored in Wikipedia. However, Wikidata’s mixture of human and bot editing, the knowledge of Wikipedia as a resource, and evolving features such as multiple property types, source annotations, and qualifiers add a new quality to the web of knowledge, and several tools have already sprung up to take advantage of these and to demonstrate its potential. A fairly complete list is available.

Views on Wikidata

Family tree of Johann Sebastian Bach

For a straightforward example of such a tool, have a look at Mozart. This tool does not merely pull and display data about an item; it "understands" that this item is a person, and queries additional, person-specific items, such as relatives. It also shows person-specific information that does not refer to other items, such as Authority Control data. Mozart’s compositions are listed, and can be played right on the page if a file exists on Commons. To a degree, it can also use the language information in Wikidata, so you can request the same page in German (mostly).
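Under the hood, "understanding that an item is a person" comes down to reading the item's claims. The sketch below shows one hedged way to extract the items referenced by a single property from a `wbgetentities`-style record; `claim_item_ids` is a hypothetical helper of mine, while P22 ("father") is a real Wikidata property id.

```python
def claim_item_ids(entity, prop):
    """Return the Q-ids referenced by one property in an entity's claims."""
    ids = []
    for claim in entity.get("claims", {}).get(prop, []):
        snak = claim.get("mainsnak", {})
        if snak.get("snaktype") != "value":
            continue  # skip "no value" / "unknown value" statements
        value = snak["datavalue"]["value"]
        # newer API responses carry a ready-made "id"; fall back to numeric-id
        ids.append(value.get("id") or "Q%d" % value["numeric-id"])
    return ids
```

A person-aware tool would call this with P22, P25 ("mother"), P40 ("child") and so on, then fetch and render the resulting items.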

Instead of looking only for direct relatives, a tool can also follow a "chain" of certain properties between items, and retrieve an "item cluster", such as a genealogical tree (pretty and heavy-duty tree for Mozart). The Wikidata family tree around John F. Kennedy contains over 10,000 people at the time of writing. In similar fashion, a tool can follow taxonomic connections between species up to their taxonomic roots, and generate an entire tree of life (warning: huge page!).
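The chain-following idea itself is a plain breadth-first traversal of one property, e.g. P40 ("child") for a family tree or P171 ("parent taxon") for the tree of life. A minimal sketch, with the fetching function injected so the same walker could run against the live API or against test data (the function names are my own, not from any of the tools mentioned):

```python
from collections import deque

def crawl_chain(start_qid, get_targets, limit=10000):
    """Breadth-first walk of one property 'chain'.

    get_targets(qid) -> list of target Q-ids for that item.
    Returns the set of reachable Q-ids and the list of edges walked.
    """
    seen = {start_qid}
    queue = deque([start_qid])
    edges = []
    while queue and len(seen) < limit:
        qid = queue.popleft()
        for target in get_targets(qid):
            edges.append((qid, target))
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen, edges
```

The `limit` guard matters in practice: as the Kennedy tree shows, property chains on Wikidata can easily reach tens of thousands of items.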

These tools demonstrate that even in its early stages, Wikidata makes it possible to generate complex results with a fairly moderate amount of programming. For a more futuristic demo, talk to Wiri (Google Chrome recommended).

Edit this item

Unsurprisingly to anyone who has volunteered on Wikimedia projects before, tools to help with editing are also emerging. Some have the dual function of interrogating Wikidata and displaying results, while at the same time informing about "things to do". If you look at the genre of television series on Wikidata, you will notice that over half of them have no genre assigned. (Hint: Click on the "piece of pie" in the pie chart to see the items. Can you assign a genre to Lost?)

When editing Wikidata, one usually links to an item by looking for its name. Bad luck if you look for "John Taylor", for there are currently 52 items with that name but no discerning description. If you want to find all items that use the same term, try the Terminator; it also has daily updated lists of items that share a title but lack a description.
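The worklist the Terminator produces can be sketched in a few lines: group items by label, keep the ambiguous labels, and report the items still lacking a description. This is an illustrative reimplementation of the idea, not the tool's actual code.

```python
def ambiguous_without_description(items):
    """items: iterable of (qid, label, description) tuples.

    Return {label: [qids missing a description]} for every label
    shared by more than one item.
    """
    by_label = {}
    for qid, label, desc in items:
        by_label.setdefault(label, []).append((qid, desc))
    worklist = {}
    for label, entries in by_label.items():
        if len(entries) > 1:
            missing = [qid for qid, desc in entries if not desc]
            if missing:
                worklist[label] = missing
    return worklist
```

Adding a short description to each listed item is exactly the kind of small fix that makes disambiguation during editing painless.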

Similarly, you can look for items by Wikipedia category. If you want a more complex filter, or want to write your own tool and are looking for something to ease your workload, there is a tool that can find, say, operas without a librettist (you will need to edit the URL to change the query, though).

There are also many JavaScript-based tools that work directly on Wikidata: with a single click you can import all language links or a complete species taxonomy from Wikipedia, find authority control data, declare the current item to be a female football player from Bosnia, or apply the properties of the current item to all items in the same Wikipedia category. Tools for all of these exist.

This is only the beginning

While most of these tools are little more than demos, or primarily serve Wikidata and its editors, they nicely showcase the potential of the project. There might not be much you can learn about Archduke Ernest of Austria from Wikidata, but it is more than you would get from English Wikipedia, which has no article on him. It might be enough information to write a stub article. And with more statements being added, more property types (dates, locations) emerging, and more powerful ways to query Wikidata on the horizon, I am certain we will see many more, and even more amazing, tools written in the near future. Unless the Singularity writes them for us.
