Posts Tagged ‘english’



Pushing Wikidata to the next level

(The German version of this post is here.)

In early 2010 I met Denny and Markus for the first time, in a small room at the Karlsruhe Institute of Technology, to talk about Semantic MediaWiki, its development and its community. I was intrigued by the idea they’d been pushing for since 2005 – bringing structured data to Wikipedia. So when the time came to assemble the team for the development of Wikidata and Denny approached me to do community communications for it, there was no way I could have said no. The project sounded amazing, and the timing was perfect since I was about to finish my studies of computer science. In the one and a half years since then we have achieved a lot: we’ve built a solid technical base for Wikidata and, much more importantly, an amazing community around it. On a personal level, I could never have dreamed where that one meeting in a small room in Karlsruhe would take me.

From now on I will be taking over product ownership of Wikidata as its product manager.

Up until today we’ve built the foundation for something extraordinary. At the same time, there is still a lot that all of us need to work on together. The areas we need to focus on now are:

  • Building trust in our data. The project is still young, and Wikipedia editors and others are still wary of using data from Wikidata on a large scale. We need to build tools and processes that make our data more trustworthy.
  • Improving the user experience around Wikidata. Building Wikidata to the point where it is today was a tremendous technical task that we achieved in a rather short time. This, however, meant that in places the user experience did not get as much attention. We need to make the experience of using Wikidata smoother.
  • Making Wikidata easier to understand. Wikidata is a very geeky and technical project. However, to be truly successful, the ideas behind it need to be easy to grasp.

These are crucial for Wikidata to have the impact we all want it to have, and we will all need to work on them – both in the development team and in the rest of the Wikidata community.

Let’s make Wikidata a joy to use and get it used in places and ways we can’t even imagine yet.


Data for the people!

(The German version of this post is here.)

In the first session of the first Wikimania, I presented the idea of enriching Wikipedia with structured data. Asked how long it would take to implement this, I answered: “Two weeks, if you know the MediaWiki software well.”

That was in 2005. It turned out I would be slightly off.


Now, in 2013, we finally started using structured data from Wikidata in the Wikipedias. The project is still in its infancy, but I am already extremely proud of the Wikidata team and what they have achieved. I am very thankful to the many, many people who helped us get to where we are today (I started listing them explicitly, but this post became too long). There are still many things that need to be done, but the rough sketch of what Wikidata is and is not has been drawn, and I think we have created a very interesting new project. I am confident enough about Wikidata and its future, or else I would not be leaving.



A categorical imperative?

(The German version of this article is here.)

This is the third in a short series of blog entries in which I explain some of the design decisions for Wikidata. The first one was about restricting property values or properties, the second about veracity and verifiability. The essays represent my personal opinion, and are not to be understood as the official opinion of the Wikidata project.

First, a name that people doing knowledge representation care very, very strongly about: Barbara. Introduced about 2400 years ago by Aristotle (teacher to Alexander the Great, who had conquered the known world and beyond by the age of 33 – school and awesome teachers do matter!) and named almost a millennium later by my favorite philosopher, Boethius. (Seriously, this guy is awesome. He had everything you could have hoped for back in that time, and he lost it all. Read his bio. He had both his sons made consuls of the mightiest empire of the world, and then suddenly his riches were taken, his family members executed, and he was awaiting his own execution in exile in a prison. Instead of lamenting, he sat down and wrote a book about what really is important in life. Read his Consolation of Philosophy. It remained on the bestseller list for a few centuries, not without reason. Kings copied it by hand!) Barbara is part of the logical foundation of anything that has to do with classes. You might know classes as types, categories, genera, or anything else that is somehow taxonomical. Barbara is a type of syllogism, and thus a rule for correct reasoning. Modus Barbara states that if all A are B, and all B are C, well, then all A are also C. As an example: if we know that all billionaires are human, and we know that all humans are mortal – bang – all billionaires are mortal, too.
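Modus Barbara is small enough to state and machine-check in a few lines. Here is a minimal sketch in Lean (my addition for illustration – read A, B and C as “is a billionaire”, “is human” and “is mortal” in the example above):

```lean
-- Modus Barbara: if all A are B and all B are C, then all A are C.
-- A, B and C are arbitrary predicates over some type α.
theorem barbara {α : Type} {A B C : α → Prop}
    (hAB : ∀ x, A x → B x) (hBC : ∀ x, B x → C x) :
    ∀ x, A x → C x :=
  fun x hA => hBC x (hAB x hA)
```

The proof is just composition: to show C x, feed the evidence for A x through hAB and then through hBC.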


Wikidata quality and quantity

One of the goals of the Wikidata development project is a community that is strong enough to maintain the content in Wikidata. The community is – as with all other Wikimedia projects – the only guarantee of quality and sustainability.

It is not one of the objectives of the Wikidata development project to be the largest collection of data on the net. The sheer number of statements in Wikidata is not a metric that indicates healthy growth or quality. Since it is an easy-to-obtain and easily understood number, it is nonetheless used a lot, but we should not attach too much importance to it.

This leads to the question: which metrics are meaningful for quality in Wikidata? And I have to admit: we do not know. This may seem particularly ironic, since my dissertation was on the topic of quality measurement of knowledge structures. But it is not surprising: it has only been possible to make statements in Wikidata for about half a year. The site is under continuous development, and some important pieces of quality assurance that are planned for Wikidata have not yet been developed – including, for example, ranks for statements, web links as a data type, the protection of individual statements, and aggregated views of the data. How to make quality measurable in Wikidata, and which metrics correlate with quality, has simply not yet been investigated sufficiently. I expect that science will provide some answers in the coming months and years.

To get an overview of the development of Wikidata, we must for the time being make assumptions about which numbers likely indicate quality. I hereby call on the community to make suggestions and discuss them. A few first thoughts follow.

The number of data elements (items) does not seem to be a useful measure. So far this number has been driven almost exclusively by the items required for storing language links. Accordingly, there was strong growth initially, while the links were being transferred, and in recent months the number has been relatively stable.

The number of edits per page seems more meaningful. Last week it went above 5.0 and is rising quickly. The raw number of edits in Wikidata is less meaningful than in many other Wikimedia projects, as an extraordinarily high proportion of the edits are made by bots – programs written by users to make changes automatically or semi-automatically. The bots are controlled by a group of about 80 users. This leads many to the idea that Wikidata is only written by bots. But that is not true: every month, 600,000 to 1 million edits are performed by human users. These are numbers that only the most active Wikipedias reach – and that includes their own bot edits. Worries that Wikidata is growing too fast and that the quality of the data would suffer have so far, apart from anecdotes, not proven true.

Perhaps the simplest metric is the number of active users. Active users in Wikimedia projects are defined as users who contributed at least five edits in a given month. Wikidata has nearly 4,000 active users, which places it sixth among the most active Wikimedia projects, together with the Japanese and Russian Wikipedias, behind only the English Wikipedia, Commons, and the German, French and Spanish Wikipedias. In other words, Wikidata has more active users than 100 of the smaller Wikipedias combined! Whenever the smaller Wikipedias access Wikidata, they rely on a knowledge base that is maintained by a much larger community than their own. But the advantages don’t end there: by using the content of Wikidata in the Wikipedias, it becomes more visible, gets more attention, and errors are more likely to be found (although we still lack the technical means to easily correct an error from within Wikipedia – but that is on the development plan). This mainly benefits the smaller Wikipedias.

But there are already useful advantages for the larger Wikipedias too: an exciting – and for me completely unexpected – opportunity for quality assurance came when the English Wikipedia decided not to simply take IMDb IDs from Wikidata, but instead to load them from Wikidata, compare them with the existing numbers in Wikipedia, and in the case of inconsistency add a hidden category to the article. This way, hard-to-detect errors and easily vandalized data get an additional safety net: it may well be that a number has a typo on the English Wikipedia, or that some especially funny person switched the ID of Hannah Montana’s latest film with that of Natural Born Killers in the French Wikipedia – but now these situations are detected quickly and automatically. Data that has been validated in several ways like this can then be used by the smaller Wikipedias with little concern.
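To make the mechanism concrete, here is a minimal sketch in Python of the underlying cross-check. The wbgetentities API module and the IMDb ID property (P345) are real; the item ID and the local value below are made up for illustration, and the actual check runs inside Wikipedia’s template machinery rather than as an external script:

```python
# Sketch: fetch the IMDb ID (P345) stored on a Wikidata item and compare
# it with the value used locally in a Wikipedia article.
import requests

API = "https://www.wikidata.org/w/api.php"

def imdb_id_from_wikidata(item_id):
    """Return the first IMDb ID (P345) claimed on a Wikidata item, or None."""
    r = requests.get(API, params={
        "action": "wbgetentities",
        "ids": item_id,
        "props": "claims",
        "format": "json",
    })
    claims = r.json()["entities"][item_id].get("claims", {})
    for claim in claims.get("P345", []):
        snak = claim["mainsnak"]
        if snak["snaktype"] == "value":
            return snak["datavalue"]["value"]
    return None

local_value = "tt0110632"                         # ID found in the local infobox
remote_value = imdb_id_from_wikidata("Q1000000")  # hypothetical item ID
if local_value != remote_value:
    print("Mismatch: flag the article, e.g. with a hidden category.")
```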

As mentioned earlier, a lot is still missing and Wikidata is a very young project. Many of the statements in Wikidata lack a source. Even in the German Wikipedia, the statement that Paris is the capital of France does not have a source. Should we impose much stricter rules on a much smaller project after such a short time? But, one may interject, if a statement has no source, I cannot use it in my Wikipedia. And that is perfectly okay: it is already possible to use data from Wikidata only if it has a source of a certain type.

There are two ways to ensure the long-term quality of Wikipedia: allow users to be more effective, or attract more users. We should continue to pursue both, and Wikidata uses both very effectively: the mechanisms described above aim to give users the means to build more powerful tools and processes for quality assurance, and at the same time Wikidata has already brought more than 1,300 new users to the Wikimedia projects who had not edited in the other Wikimedia projects before.

Wikidata’s main goal is to support the Wikimedia projects: it should enable higher quality of the content and reduce the effort required to achieve it. We need more metrics that capture this goal and show how we are evolving. The simple metrics all indicate that the initial growth in breadth has come to an end after a few months, and that the project is gaining in depth and quality. There are useful applications for small as well as large projects. But it is also clear that I am an avid supporter of Wikidata and therefore biased, and so I am starting a call for ideas on how to track Wikidata’s effects critically and accurately.


Wikidata and other technical bits at Wikimania

Denny, Lydia and Daniel (by Fabrice Florin, CC-by-sa 2.0)

I’m back from an amazing Wikimania. First of all, thank you to everyone who helped make the event happen. It was very well organized and an overall useful and productive event. I was there to discuss everything Wikidata, as well as new technology like the VisualEditor and Flow and how they affect the German-language Wikipedia.

It felt like Wikidata and the VisualEditor were on everyone’s mind during this Wikimania. No matter which talk or panel or dinner I went to, every single one of them mentioned Wikidata and the VisualEditor in some way. It’s great to see the Wikimedia community embrace Wikidata as its sister project. And the VisualEditor – while still rough – seems to be getting to that point very quickly too.


On truths and lies

(The German version of this article is here.)

This is the second in a short series of blog entries in which I explain some of the design decisions behind Wikidata. The first one was about restricting property values or properties. The essays represent my personal opinion, and are not to be understood as the official opinion of the Wikidata project.

Databases have an aura of correctness. When we query a database, we expect the result that comes back to basically be The Answer and The Truth. Ask Amazon’s database about the author of the Bible. Ask IMDB about the director of Adaptation. You are not expecting to get a possible answer, or different points of view – you expect one definitive answer.

Wikidata is collecting structured data about the world. It is basically a crowdsourced database. Unlike text, structured data necessarily and unfortunately lacks nuance. Whereas it is possible to talk about the statehood of Kosovo in an NPOV way in natural language, a naive approach to representing that in structured data would fail: either we say Kosovo is a state, or we do not. There are no shades of grey.

Fortunately, some of the roots of Wikidata lie in an EU research project called RENDER. The goal of this project is to explore and support the diversity of knowledge on the Web. RENDER discards the assumption of a simple, single truth – and this was inherited by the Wikidata data model. Instead of collecting facts, we collect statements. We define statements as claims that can have references, where a reference supports the claim. A beautiful example is Ethanol, where the CAS number – a standard identifier for chemical compounds – is given with a reference to the actual standard, pointing out the page in the source.
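Conceptually, a statement is therefore a claim plus any number of supporting references. A rough sketch of the Ethanol example as a data structure (this is not the exact Wikidata serialization; the CAS Registry Number property P231 and the value are real, the reference fields are illustrative):

```python
# One statement about the item "Ethanol": a claim plus a supporting reference.
statement = {
    "property": "P231",      # CAS Registry Number
    "value": "64-17-5",      # the CAS number of ethanol
    "references": [{
        "source": "the CAS standard",  # illustrative reference fields
        "page": "…",                   # deliberately left elided
    }],
}
```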

Unlike many other databases, Wikidata can contain contradicting statements, supported by different references. And unlike the natural-language text in Wikipedia, Wikidata does not offer the possibility to reconcile and explain the differences in prose, giving due weight to the different points of view. The responsibility for deciding which sources to trust lies with the Wikidata reader and reuser. I expect quite a bit of research and exploration on this question in the following years. The first reusers to deal with these issues will be the Wikipedia communities that opt to use data from Wikidata.

In the next few weeks and months we will add a few more features to support the diversity of statements in Wikidata.

Currently, the most obvious omission is the lack of datatypes for numbers, text and URLs. Only with these datatypes will it be possible to actually write down references in their full glory. Another opportunity – once URLs are available – would be to provide content locators for text in HTML pages through XPath, OXPath, CSS selectors, or something similar, thus enabling bots to check whether the given references are still valid. I am very curious to see how the usage of references and sources will develop in and around Wikidata.
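As a sketch of what such a bot check could look like once URLs are a datatype – the URL, the selector and the helper below are hypothetical; the point is only the general technique of fetching the page and testing a locator:

```python
# Sketch: test whether a referenced web page still contains the quoted
# value at a given CSS selector.
import requests
from bs4 import BeautifulSoup

def reference_still_valid(url, css_selector, expected_text):
    html = requests.get(url, timeout=10).text
    node = BeautifulSoup(html, "html.parser").select_one(css_selector)
    return node is not None and expected_text in node.get_text()

# Hypothetical usage:
# reference_still_valid("https://example.org/cas/ethanol",
#                       "#registry-number", "64-17-5")
```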

Another major feature that will be introduced in the course of this year is the possibility to rank statements: not all statements are to be regarded equally. We will introduce three ranks, and every statement will be in one of them: preferred, normal, and deprecated.

“Preferred” statements should be the most current and most widely accepted statements. There can be several preferred statements for the same item and property.

“Deprecated” statements are those that are considered unreliable for some reason. They are still recorded because they might have a strong source supporting them, or because they are widely spread for some reason, even though they are no longer accepted. Examples include typos in influential textbooks – for example regarding the iron content of spinach, or the length of the Rhine – or numbers spread by some form of propaganda that are no longer considered correct.

“Normal” statements are the ones left over, which are neither “preferred” nor “deprecated”. This will often apply to historic statements (the population of Rome in the time of Julius Caesar, former capitals of Russia, etc.).

Technically, we will start with using only preferred statements for answering queries (i.e. when you ask for all capitals with a population of less than 500,000, you won’t get answers where the city had a population of 120,000 in the 16th century). They will also be the only statements returned by the property parser function. The Lua interface will have access to all statements and thus provide full flexibility. Query answering is planned to be extended later to support more complex queries, at which point we will have to think about integrating the other ranks.
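A minimal sketch of that initial behavior, with simplified statement objects (the rank names are the real ones described above; the example reuses the former-capitals-of-Russia case):

```python
# Only "preferred" statements are used for answering queries at first;
# "normal" and "deprecated" statements are stored but not returned.
def statements_for_queries(statements):
    return [s for s in statements if s["rank"] == "preferred"]

capital_of_russia = [
    {"value": "Moscow",           "rank": "preferred"},
    {"value": "Saint Petersburg", "rank": "normal"},  # historic capital
]
print(statements_for_queries(capital_of_russia))  # only the Moscow statement
```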

The ranks should allow for a more inclusive policy in Wikidata, allowing it to reflect a wider diversity of knowledge.

To give an idea of the time scale: we will first implement the datatypes that are still missing, and then, as a prerequisite for ranks, the possibility to reorder statements. After that, ranks will be the next feature to land in Wikidata.

Ranks introduce a vector for debate that has not existed in Wikidata before. The question moves from “should this statement be included?” to “what should be the rank of this statement?” This seems like a necessary step: unlike natural text, Wikidata could otherwise not include statements that are agreed to be bogus but that have historical or other value. This makes it even more important to remember that Wikidata is not about truth, but about collecting referenced statements in a secondary database. The criterion for inclusion should not be veracity, but verifiability – a policy that has served Wikipedia very well.

Wikidata will always – and that is both a necessity and acknowledged by design – fall short of Wikipedia in many respects. Wikipedia articles can explore causal and informal connections, they can inspire curiosity, and they can support one of the major modes of knowledge transfer between humans: storytelling. Wikidata has other, unique advantages: it can more easily provide some ground data about a topic of interest in many languages, and it provides the data in a way that is much more accessible for bots and apps. It could be a step towards relieving some Wikipedias of the many bot-created articles that are never touched by a human editor, clutter recent changes, and skew statistics.

Without the ability to express a plurality of statements about an item – even if they are considered truths only by some and lies by others – Wikidata would fall short of one of the major pillars of Wikipedia, the Neutral Point of View and the possibility of integrating conflicting points of view.

I hope that the technical platform that we as developers are building, and the rules and processes of the communities in Wikidata, the Wikipedias, and the other Wikimedia projects, will establish a useful ecosystem – one that understands the limitations of each project and discovers how we can most effectively help each other. And this means understanding the peculiar relationship between Wikidata and the Truth.


The Wikidata tool ecosystem

(The German version of this article is here.)

The following is a guest post by Magnus Manske, an active tool developer around Wikidata and the author of the software that later evolved into MediaWiki.

Wikidata is the youngest child of the Wikimedia family. Its main purpose is to serve as a “Commons for factoids”: a central repository for key data about the topics on, and links between, the hundreds of language editions of Wikipedia. At the time of writing, Wikidata already contains about 10 million items, more than any edition of Wikipedia (the English Wikipedia currently has 4.2 million entries). But while, as with Commons, its central purpose is to serve Wikipedia and its sister projects, Wikidata has significant value beyond that; namely, it offers machine-readable, interlinked data about millions of topics in many languages via a standardized interface (API).

Such a structured data repository has long been a “holy grail” of computer science, from the humble beginnings of research into artificial intelligence, through current applications like Google’s Knowledge Graph and Wolfram Alpha, towards future systems like “intelligent” user agents or (who knows?) the Singularity.

The scale of any such data collection is daunting, and while some companies can afford to pour money into it, other groups, such as DBpedia, have tried to harvest the free-form data stored in Wikipedia. However, Wikidata’s mixture of human and bot editing, the knowledge of Wikipedia as a resource, and evolving features such as multiple property types, source annotation, and qualifiers add a new quality to the web of knowledge, and several tools have already sprung up to take advantage of these features and to demonstrate the project’s potential. A fairly complete list is available.

Views on Wikidata


Family tree of Johann Sebastian Bach

For a straightforward example of such a tool, have a look at Mozart. This tool does not merely pull and display data about an item; it “understands” that this item is a person, and queries additional, person-specific items, such as relatives. It also shows person-specific information that does not refer to other items, such as Authority Control data. Mozart’s compositions are listed, and can be played right on the page if a file exists on Commons. To a degree, it can also use the language information in Wikidata, so you can request the same page in German (mostly).

Instead of looking only for direct relatives, a tool can also follow a “chain” of certain properties between items and retrieve an “item cluster”, such as a genealogical tree (a pretty and heavy-duty tree for Mozart). The Wikidata family tree around John F. Kennedy contains over 10,000 people at the time of writing. In a similar fashion, a tool can follow taxonomic connections between species up to their taxonomic roots and generate an entire tree of life (warning: huge page!).
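The underlying technique is a plain breadth-first traversal over item links. A sketch, assuming a helper linked_items(item) that resolves one item-valued property (say P40, “child”) via the API:

```python
# Sketch: collect an "item cluster" by repeatedly following one property
# (e.g. P40, "child") from item to item, up to a depth limit.
def collect_cluster(start, linked_items, max_depth=3):
    seen = {start}
    frontier = [start]
    for _ in range(max_depth):
        next_frontier = []
        for item in frontier:
            for neighbor in linked_items(item):  # e.g. a person's children
                if neighbor not in seen:
                    seen.add(neighbor)
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return seen
```

Swap the followed property for “parent taxon” and the same traversal yields the tree of life mentioned above.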

These tools demonstrate that even in its early stages, Wikidata makes it possible to generate complex results with a fairly moderate amount of programming. For a more futuristic demo, talk to Wiri (Google Chrome recommended).

Edit this item

Unsurprisingly to anyone who has volunteered on Wikimedia projects before, tools to help with editing are also emerging. Some have a dual function: they interrogate Wikidata and display results, while at the same time pointing out “things to do”. If you look at the genres of television series on Wikidata, you will notice that over half of them have no genre assigned. (Hint: click on the “piece of pie” in the pie chart to see the items. Can you assign a genre to Lost?)

When editing Wikidata, one usually links to an item by looking for its name. Bad luck if you look for “John Taylor”: there are currently 52 items with that name but no distinguishing description. If you want to find all items that use the same term, try the Terminator; it also has daily-updated lists of items that have the same title but no description.

Similarly, you can look for items by Wikipedia category. If you want a more complex filter, or want to write your own tool and are looking for something to ease your workload, there is a tool that can find, say, operas without a librettist (you will need to edit the URL to change the query, though).

There are also many JavaScript-based tools that work directly on Wikidata. A single click to import all language links or species taxonomy from Wikipedia, find authority control data, declare the current item to be a female football player from Bosnia, or apply the properties of the current item to all items in the same Wikipedia category — tools for all of these exist.

This is only the beginning

While most of these tools are little more than demos, or primarily serve Wikidata and its editors, they nicely showcase the potential of the project. There might not be much you can learn about Archduke Ernest of Austria from Wikidata, but it is more than you would get from the English Wikipedia (no article) – perhaps enough information to write a stub article. And with more statements being added, more property types (dates, locations) emerging, and more powerful ways to query Wikidata, I am certain we will see many more, even more amazing, tools written in the near future. Unless the Singularity writes them for us.


Wikidata all around the world

(The German version of this article is here.)

For a month now, 11 Wikipedias have had the ability to include data from Wikidata in their articles. Two days ago the English Wikipedia was added to that group. Today the remaining 274 are joining. Usage examples are in the last blog entry. There is also an FAQ for this deployment.

This is a huge step for Wikidata and at the same time another beginning. It is a huge step because from now on all Wikipedias are able to collect, curate and use data together. For example, every Wikipedia can query the ID of a movie on the Internet Movie Database and use it in its articles as soon as someone has added it to Wikidata. At the same time it is a beginning because there is still a lot to do. Accessing the data has to be made easier. More data has to be added to Wikidata (and translated where necessary). More sources have to be added to existing claims. More datatypes need to be made available – for example geocoordinates and time. Your help and your feedback are very welcome and important here.

We’re looking forward to the next steps!


And that makes 12

(The German version of this post is here.)

Today the English Wikipedia got the ability to include data from Wikidata. Four weeks ago the first 11 Wikipedias started testing this second phase of the project. This means that by now 12 Wikipedias can make use of the shared data, in their infoboxes for example. The available data includes things like the conservation status of a species, the ISBN of a book, or the top-level domain of a country.

A Request for Comments about how to use data from Wikidata is currently ongoing. Until it is closed, you can continue to try things out on test2.wikipedia.org.

There are two ways to access the data:

  • Use a parser function like {{#property:p159}} in the wiki text of the article on the Wikimedia Foundation. This will return “San Francisco”, as that is the current headquarters location of the non-profit.
  • For more complicated things you can use Lua. The documentation for this is here.

We are working on expanding the parser function so you can, for example, use {{#property:headquarters location}} instead of {{#property:p159}}. The complete plan for this is here.
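Conceptually, that convenience boils down to resolving a property label to its ID before the lookup. Here is a sketch of just the resolution step, done via the public wbsearchentities API module for illustration (in the wiki this would happen internally rather than through the web API):

```python
# Sketch: resolve a property label like "headquarters location" to its
# property ID (expected: P159) using the wbsearchentities API module.
import requests

def property_id_for_label(label, language="en"):
    r = requests.get("https://www.wikidata.org/w/api.php", params={
        "action": "wbsearchentities",
        "search": label,
        "type": "property",
        "language": language,
        "format": "json",
    })
    results = r.json().get("search", [])
    return results[0]["id"] if results else None

# property_id_for_label("headquarters location")  # expected: "P159"
```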

The next step is the deployment to all remaining 274 Wikipedias. If there are no issues, they will follow on Wednesday.

There is an FAQ for this deployment. Please help us with testing and feedback. The best place to leave feedback is this discussion page.


You can have all the data!

(The German version of this post is here.)

Today the first 11 Wikipedias got the ability to include data from Wikidata in their articles: the Italian, Hebrew, Hungarian, Russian, Turkish, Ukrainian, Uzbek, Croatian, Bosnian, Serbian and Serbo-Croatian Wikipedias. If you are curious, you can also try it out on test2.wikipedia.org. This means the editors of these Wikipedias are now able to make use of the growing amount of structured data that is available in Wikidata as a common dataset. It includes things like the conservation status of a species, the ISBN of a book, or the top-level domain of a country.

There are two ways to access the data:

  • Use a parser function like {{#property:p169}} in the wiki text of the article on Yahoo!. This will return “Marissa Mayer”, as she is the chief executive officer of the company.
  • For more complicated things you can use Lua. The documentation for this is here.

We are working on expanding the parser function so you can, for example, use {{#property:chief executive officer}} instead of {{#property:p169}}. The complete plan for this is here.

The next step is the deployment to the other Wikipedias. We will carefully monitor performance, and if there are no issues they will follow within a week or two.

We have prepared an FAQ for this deployment and are looking forward to your testing and feedback. The best place to leave feedback is this discussion page.
