Posts Tagged ‘Wikidata’



Scaling Wikidata: success means making the pie bigger

Summary: Wikidata is growing and becoming more successful. In the coming year we need to develop strategies and tools for scaling Wikidata. In this post I lay out my thoughts on this.


 

Wikidata is becoming more successful every single day. Every single day we cover more topics and have more data about them. Every single day new people join our community. Every single day we provide more people with more access to more knowledge. This is amazing. But with any growth come growing pains. We need to start thinking about them and build strategies for dealing with them.

Wikidata needs to scale in two ways: socially and technically. I will not go into the details of technical scaling here but will instead focus on social scaling. By social scaling I mean enabling all of us to deal with more attention, more data and more people around Wikidata. Several key things need to be in place to make this happen:

  • A welcome wagon and good documentation for newcomers to help them become part of the community and understand our shared norms, values, policies and traditions.
  • Good tools to help us maintain our data and find issues quickly and deal with them swiftly.
  • A shared understanding that providing high-quality data and knowledge is important.
  • Communication tools like the weekly summary and Project chat that help us keep everyone on the same page.
  • Structures that scale: enough people with advanced rights so that no one of them is overwhelmed or burned out.

We have all of these in place but all of them need more work from all of us to really prepare us for what is ahead over the next months and years.

One of the biggest pressures Wikidata is facing now is organisations wanting to push large amounts of data into Wikidata. This is great if it is done correctly and if it is data we truly care about. There are key criteria I think we should consider when accepting large data donations:

  • Is the data reliable, trustworthy, current and published somewhere referenceable? We are a secondary database, meaning we state what other sources say.
  • Is the data going to be used? Data that is not used is far harder to maintain because fewer people see it.
  • Is the organization providing the data going to help keep it in good shape? Or are other people willing to do it? Data donations need champions who feel responsible for making them a success in the long run.
  • Is it helping us fix an important gap or counter a bias we have in our knowledge base?
  • Is it improving existing topics more than adding new ones? We need to improve the depth of our data before we continue to expand its breadth.

So once we have this data, how can we make sure it stays in good shape? One of the crucial points for scaling Wikidata is the quality of, and trust in, its data. How can we ensure high quality even at large scale? These are the key pieces needed to achieve this:

  • A community that cares about making sure the data we provide is correct, complete and up-to-date
  • Many eyes on the data
  • Tools that help maintenance
  • An understanding that we don’t have to have it all

Many eyes on the data. What does that mean? The idea is simple: the more people see and use the data, the more people can find mistakes and correct them. The more data from Wikidata is used, the more people will come into contact with it and help keep it in good shape. More usage of Wikidata's data in the large Wikipedias is an obvious goal here. More and more infoboxes need to be migrated over the next year to make use of Wikidata. The development team will concentrate on making this possible by removing the big remaining blockers, such as support for quantities with units and access to data from arbitrary items, and by providing good examples and documentation. At the same time we need to work on improving the visibility of changes on Wikidata in the Wikipedias' watchlists and recent changes.

Just as important for getting more eyes on our data are third-party users outside Wikimedia. Wikidata's data is starting to be used all over the internet and is being exposed to people even in unexpected places. What is of utmost importance in both cases is that it is easy for people to feed changes back to Wikidata. This will only work with well-functioning feedback loops. We need to encourage third-party users to be good players in our ecosystem and make this happen – also for their own benefit.

Tools that help maintenance. As we scale Wikidata we also need to provide more and better tools to find issues in the data and fix them. Making sure that the data is consistent with itself is the first step. A team of students is now working with the development team on improving the system for that. This will make it easy to spot, say, people whose date of birth is after their date of death. The next step is checking against other databases and reporting mismatches; that is the other part of the student project. When you look at an item you should immediately see statements that are flagged as potentially problematic and be able to review them. In addition, more and more visualizations are being built that make it easy to spot outliers. One recent example is the Tree of Life.
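To make this concrete, here is a minimal sketch (in Python, against the public Wikidata web API) of the kind of self-consistency check described above. The property IDs are real Wikidata properties (P569: date of birth, P570: date of death); everything else is our illustration, not the actual tool the students are building.

```python
import requests

API = "https://www.wikidata.org/w/api.php"

def first_time_value(entity, prop):
    """Return the first time value for a property (e.g. '+1952-03-11T00:00:00Z'), or None."""
    for claim in entity.get("claims", {}).get(prop, []):
        snak = claim["mainsnak"]
        if snak.get("snaktype") == "value":
            return snak["datavalue"]["value"]["time"]
    return None

def check_birth_before_death(item_id):
    """Flag an item whose date of birth (P569) is after its date of death (P570)."""
    response = requests.get(API, params={
        "action": "wbgetentities", "ids": item_id,
        "props": "claims", "format": "json"})
    entity = response.json()["entities"][item_id]
    born = first_time_value(entity, "P569")
    died = first_time_value(entity, "P570")
    # Wikidata stores times as ISO-like strings; for common-era dates a
    # lexicographic comparison is good enough for a rough check like this.
    if born and died and born > died:
        print(f"{item_id}: born {born} after died {died} - needs review")

check_birth_before_death("Q42")  # Douglas Adams, just as an example item
```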

An understanding that we don't have to have it all. We should not aim to be the one and only place for structured open data on the web. We should strive to be a hub that covers important ground but also gives users the ability to find other, more specialized sources. Our mission is to provide free access to knowledge for everyone, and we can fulfill it just as well by pointing to other places where people can get that information. This is especially the case for niche topics and highly detailed data. We are part of a larger ecosystem, and we should help expand the pie for everyone by being a hub that points to all kinds of specialized databases. Success means making the pie bigger – not getting the whole pie for ourselves. We can't do it all on our own.

If we keep all this in mind and preserve our welcoming culture we can continue to build something truly amazing and provide more people with more access to more knowledge every single day.

Improving the data quality and trust in the data we have will be a major development focus of the first months of 2015.


Wikidata for Research – a grant proposal that anyone can edit

Summary: A few weeks ago this blog reported on an initiative that created Wikidata entries for all of the nearly 40,000 human genes. Here Daniel Mietchen – a researcher at the Museum für Naturkunde Berlin and an active Wikimedian – builds on that idea and presents a European research proposal for integrating Wikidata with scientific databases, which anyone can edit via Wikidata before it is submitted in just under six weeks.


A few weeks ago, this blog was enriched with a post entitled “Establishing Wikidata as the central hub for linked open life science data”. It introduced the Gene Wiki – a wiki-based collection of information related to human genes – and reported on the creation of Wikidata items for all human genes, along with their annotation with statements imported from a number of scientific databases. The blog post mentioned plans to extend the approach to diseases and drugs, and a few weeks later (in the meantime, Wikidata had won an Open Data award), the underlying proposal for the grant that funds these activities was made public, followed by another proposal that involves Wikidata as a hub for metadata about audiovisual materials on scientific topics.

Now it’s time to take this one step further: we plan to draft a proposal that aims at establishing Wikidata as a central hub for linked open research data more generally, so that it can facilitate fruitful interactions at scale between professional research institutions and citizen science and knowledge initiatives. We plan to draft this proposal in public – you can join us and help develop it via a dedicated page on Wikidata.

The proposal – provisionally titled “Wikidata for research” – will be coordinated by the Museum für Naturkunde Berlin (for which I work), in close collaboration with Wikimedia Germany (which oversees development of Wikidata). A group of about three to four further partners is invited to join in, and you can help determine who these may be. Maastricht University has already signaled interest in covering data related to small molecules, and we are open to suggestions from any discipline, as long as there are relevant databases suitable for integration with Wikidata.

Two aspects – technical interoperability and community engagement – are the focus of the proposal. In terms of the former, we are interested in external scientific databases providing information to Wikidata, with the intention that both parties profit from this. The information may take the form of new items, new properties, or statements added to existing ones. One focus here would be on mapping the identifiers that different databases use to describe related concepts, and on aligning the controlled vocabularies built around them.
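As a hypothetical illustration of such identifier mapping (not part of the proposal text): Wikidata items about genes already carry external-identifier statements such as Entrez Gene ID (P351) and Ensembl gene ID (P594), both real Wikidata properties, so a cross-database mapping can be read straight off an item's claims.

```python
import requests

API = "https://www.wikidata.org/w/api.php"

def external_ids(item_id, props=("P351", "P594")):
    """Collect external-identifier values of an item, keyed by property ID.
    P351 is Entrez Gene ID, P594 is Ensembl gene ID."""
    response = requests.get(API, params={
        "action": "wbgetentities", "ids": item_id,
        "props": "claims", "format": "json"})
    claims = response.json()["entities"][item_id]["claims"]
    mapping = {}
    for prop in props:
        for claim in claims.get(prop, []):
            snak = claim["mainsnak"]
            if snak.get("snaktype") == "value":
                mapping.setdefault(prop, []).append(snak["datavalue"]["value"])
    return mapping

# Usage (the QID is a placeholder - substitute the item of any human gene):
# external_ids("Q...")  ->  e.g. {'P351': ['1017'], 'P594': ['ENSG00000123374']}
```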

In terms of community engagement, the focus would be on the curation of Wikidata-based information, on syncing that curation with other databases (a prototype for this is in the making) and especially on the reuse of Wikidata-based information – ideally in ways not yet possible – be it in the context of Wikimedia projects, research, or elsewhere.

Besides the Gene Wiki project, a number of other initiatives have been active at the interface between the Wikimedia and scholarly communities. Several of these have focused on curating scholarly databases, e.g. Rfam/Pfam and WikiPathways, which would thus seem like good candidates for extending the Gene Wiki’s Wikidata activities to other areas. There is also a wide range of WikiProjects on scientific topics (including within the humanities), both on Wikidata and beyond. Some of them team up with scholarly societies (e.g. Biophysical Society or International Society for Computational Biology), journals (e.g. PLOS Computational Biology) or other organizations (e.g. CrossRef). In addition to all that, research about wikis is regularly monitored in the Research Newsletter.

The work on Wikidata – including contributions by the Gene Wiki project – is being performed by volunteers (directly or through semi-automatic tools), and the underlying software is open by default. Complementing such curation work, the Wikidata Toolkit has been developed as a framework to facilitate analysis of the data contained in Wikidata. The funding proposal for that is public too and was indeed written in the open. Outside Wikidata, the proposal for Wikimedia Commons as a central hub of multimedia from open-access sources is public, as is a similar one to establish Wikisource as a central hub for open-access literature (both of these received support from Wikimedia Germany).

While such openness is customary within the Wikimedia community, it contrasts sharply with current practice in the research community. As first calls for more transparency in research funding emerge, the integration of Wikidata with research workflows seems like a good context in which to explore the potential of drafting a research proposal in public.

Like several other Wikimedia chapters, Wikimedia Germany has experience with participation in research projects (e.g. RENDER) but it is not in a position to lead such endeavours. The interactions with the research community have intensified over the last few years, e.g. through GLAM-Wiki activities, participation in the Leibniz research network Science 2.0, in a traveling science exhibition, or in events around open science. In parallel, the interest on the part of research institutions to engage with Wikimedia projects has grown, especially so for Wikidata.

One of these institutions is the Museum für Naturkunde Berlin, which has introduced Wikidata-related ideas into a number of research proposals already (no link here – all non-public). One of the largest research museums worldwide, it curates 30 million specimens and is active in digitization, database management, development of persistent identifiers, open-access publishing, semantic integration and public engagement with science. It is involved in a number of activities aimed at bringing biodiversity-related information together from separate sources and making them available in a way compatible with research workflows.

Increasingly, this includes efforts towards more openness. For instance, it participated in the Open Up! project that fed media on natural history into Europeana, in the Europeana Creative project that explores reuse scenarios of Europeana materials, and it leads the EU BON project focused on sharing biodiversity data. Within the framework of the pro-iBiosphere project, it was also one of the major drivers behind the launch of the Bouchout Declaration for Open Biodiversity Knowledge Management, which brings the biodiversity research community together around principles of sharing and openness. Last but not least, the museum participated in the Coding da Vinci hackathon that brought together developers with data from heritage institutions.

As a target for submission of the proposal, we have chosen a call for the development of “e-infrastructures for virtual research environments”, issued by the European Commission. According to the call, “[t]hese virtual research environments (VRE) should integrate resources across all layers of the e-infrastructure (networking, computing, data, software, user interfaces), should foster cross-disciplinary data interoperability and should provide functions allowing data citation and promoting data sharing and trust.”

It is not hard to see how Wikidata could fit in there, nor that making it fit still requires work. Considering that Wikidata is a global platform and that its initial funding came mainly from the United States, it would be nice to see Europe take its turn now. The modalities of this kind of EU funding are such that funds can only be provided to certain kinds of legal entities based in Europe, but we appreciate input from anywhere as to how the project should be shaped.

In order to ensure compatibility with both Wikidata and academic customs, all materials produced for this proposal shall be dual-licensed under CC BY-SA 3.0 and CC BY 4.0.

The submission deadline is very soon – on January 14, 2015, 17:00 Brussels time. Let’s find out what we can come up with by then – see you over there!

 

Written by Daniel Mietchen


Two years of Wikidata: a celebration with gifts and an award

“Wikidata team and painting” – work of a member of the Wikidata team as part of his employment. Licensed under CC BY-SA 4.0 via Wikimedia Commons

Last week Wikidata celebrated its second birthday. With Wikidata, people collect data about the world (e.g. population figures or dates of birth) in structured form and in several hundred languages. This data is used to improve Wikipedia and its sister projects, and beyond that it is freely available for anyone to reuse. Since the launch, more than 16,000 users in the Wikidata community have created over 12.8 million entries and filled them with data – as volunteers, working collaboratively as in the sister project Wikipedia. Work on the software behind Wikidata was started by Wikimedia Deutschland and is continuously being developed further as open software. Over the past two years Wikidata has become one of the most successful Wikimedia projects and ranks ahead of many language versions of Wikipedia in the number of active users.



Establishing Wikidata as the central hub for linked open life science data



Thanks to the amazing work of the Wikidata community, every human gene (according to the United States National Center for Biotechnology Information) now has a representative entity on Wikidata. We hope that these are the seeds for some amazing applications in biology and medicine. Here is a report from Benjamin Good, Andrew Su, and Andra Waagmeester on their work with Wikidata. Their work was supported by the National Institutes of Health under grant GM089820.

Graphical representation of the idealized human diploid karyotype, showing the organization of the genome into chromosomes. This drawing shows both the female (XX) and male (XY) versions of the 23rd chromosome pair. Courtesy: National Human Genome Research Institute [Public domain], via Wikimedia Commons

The life sciences are awash in data.  There are countless databases that track information about human genes, mutations, drugs, diseases, etc.  This data needs to be integrated if it is to be used to produce new knowledge and thereby improve the human condition.  For more than a decade many different groups have proposed and many have implemented solutions to this challenge using standards and techniques from the Semantic Web.  Yet, today, the vast majority of biological data is still accessed from individual databases such as Entrez Gene that make no attempt to use any component of the Semantic Web or to otherwise participate in the Linked Open Data movement.  With a few notable exceptions, the data silos have only gotten larger and problems of fragmentation worse.

In parallel to the appearance of Big Data in biology (and elsewhere), Wikipedia has arisen as one of the most important sources of information on the Web. Within the context of Wikipedia, members of our research team have helped to foster the growth of a large collection of articles that describe the function and importance of human genes. Wikipedia, and the subset of it that focuses on human genes (which we call the Gene Wiki), have flourished due to their centrality, the presence of the edit button, and the desire of the larger community to share knowledge openly.

Now, we are working to see if Wikidata can be the bridge between the open community-driven power of Wikipedia and the structured world of semantic data integration. Can the presence of that edit button on a centralized knowledge base associated with Wikipedia help the Semantic Web break through into everyday use within our community? The steps we are planning to take to test this idea within the context of the life sciences are:

  1. Establishing bots that populate Wikidata with entities representative of three key classes: genes, diseases, and drugs.
  2. Expanding the scope of these bots to include the addition of statements that link these entities together into a valuable network of knowledge.
  3. Developing applications that display this information to the public and that both encourage and enable people to contribute their knowledge back to Wikidata. The first implementation will be to use the Wikidata information to enhance the articles in Wikipedia.

We are excited to announce that the first step on this path has been completed!
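For readers curious what such a bot can look like in practice, here is a minimal sketch using Pywikibot. It is not the actual Gene Wiki bot, just an illustration; P351 (Entrez Gene ID) is a real Wikidata property, while the values in the example call are placeholders, and running it requires a configured Pywikibot login.

```python
import pywikibot

# Connect to the Wikidata repository.
site = pywikibot.Site("wikidata", "wikidata")
repo = site.data_repository()

def add_entrez_id(qid, entrez_id):
    """Add an Entrez Gene ID (P351) statement to an item, if it has none yet."""
    item = pywikibot.ItemPage(repo, qid)
    item.get()  # fetch the item's current data, including its claims
    if "P351" in item.claims:
        return  # already present; a real bot would also verify the value
    claim = pywikibot.Claim(repo, "P351")
    claim.setTarget(entrez_id)  # identifier values are plain strings
    item.addClaim(claim, summary="Adding Entrez Gene ID from NCBI")

# Placeholder call - substitute the QID of the gene item in question:
# add_entrez_id("Q12345", "1017")  # 1017 is the Entrez ID of CDK2
```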




Transatlantic work on structured data in Berlin

The German version of this post can be found here.

Last week Wikimedia Deutschland was happy to welcome guests for a special technical discussion that spanned an entire week at the headquarters in Berlin. Members of the multimedia team of the Wikimedia Foundation in San Francisco, members of the team developing software for Wikidata at Wikimedia Deutschland, and technical experts and developers from the volunteer community came together to discuss Wikimedia Commons and structured data.

Structured data was an important topic in many talks on technology at this year’s Wikimania in London. It is the principle behind Wikidata — a free knowledge base with data that can be filtered, sorted, queried, and of course edited by machines and human beings alike, all in a way that goes beyond storing wikitext in a specific human language. The technology in the engine room of Wikidata is a software project called Wikibase, which stores data in a structured way. Ideas that Wikimedia Commons, the free repository of media files, could benefit from structured data and Wikibase have been floating around for a long time, as have thoughts about making Commons more user-friendly and making license-conforming re-use of pictures easier. The weeklong meeting in Berlin marked the starting point of a planning and discussion process that brought together Wikimedians from both sides of the pond.



The 4th Wikimedia Salon – What is datafication doing to us?

“Our data is lost anyway. But we can do something for the next generation.” – Fukami

On October 2, the fourth edition of “Das ABC des Freien Wissens” discussed the influence big data has on various areas of life and, above all, on data protection. The panel split into optimists and sceptics – two poles between which the public discourse moves as well.

Chatting after the discussion. By Agnieszka Krolik, CC-BY-SA-3.0, via Wikimedia Commons

  • Fukami (@fukami), IT security expert and lobbyist, made a strong case for the protection of personal data. In his opening talk he pointed out that the political dimension of data processing and data security is nothing new: in the 1933 census, the attribute “race” was added to the punch cards, which was later used to organize the deportations. With this drastic example he showed that large-scale data aggregation always helps the more powerful and, in the wrong context, can lead to absolute catastrophe. Nevertheless, societies must always bear some risk in order to develop. The crux of this kind of risk management in IT, however, is that it may not be those who cause an error who are affected, but third parties. The complexity of so-called third-party risk assessment in the current data protection regulation, only touched upon here, finally led to his plea: whoever cannot handle security incidents should not be processing personal data.
  • Lukas F. Hartmann (@mntmn), programmer and musician, recounted how he sent the most personal data imaginable – his genes – to the company 23andme. He was told that he carried a dangerous hereditary disease, which his own research then revealed to be an error. Despite this story, apt to stir scepticism about big data at least in the medical field, Hartmann remains optimistic about these developments, even though the mere thought of psychologically less stable people reacting more dramatically to such devastating news can conjure up frightening scenarios. “We see the light!”, Hartmann said, arguing for the enlightening, knowledge-promoting quality of big data.
  • Bastian Greshake (@gedankenstuecke), co-founder of openSNP and open science activist, argued that there is no need to pity such exceptional cases. On Greshake’s platform, people can submit genetic samples, which are then made freely available to everyone. What if this data falls into the “wrong hands”? People know what they are doing, Greshake said. And given that “big data” will simply be “data” in a few months anyway, once hard drive prices drop, rebelling against the development would be futile. In Greshake’s view, the fascinating prospects big data opens up for science outweigh the concerns; he sees possible restrictions in the name of data protection as a danger, particularly for medical research. Bioethics matters, but with proper informed consent one can trust individuals to decide for themselves.

The discussion with the audience revolved around what happens when data monopolists gain interpretive authority over certain aspects of our lives. One keyword here is precrime, a dark vision whose realization is already being worked on today. But might big data also make us more empathetic? Should we rather work on bringing society to a point where data can no longer harm anyone? It became clear that how one judges big data and its effects depends on nothing less than the fundamental view of humanity one wishes for, and works toward, for the future.

Photos of the event

Preview: At the next “ABC des Freien Wissens” in November we arrive at E for “ERINNERUNG” (memory).


Outreach Program for Women at Wikidata



This May, Wikidata was part of the Outreach Program for Women. Helen Halbert and Anjali Sharma took care of documenting Wikidata for the general public and the community, with tasks ranging from guided tours for those new to Wikidata to handling the various social media channels. The following guest post is a summary by Helen (written together with Anjali) about her time with Wikidata.

The journey to contributor

This past May, Anjali and I were thrilled to learn that we would both be working for Wikidata over the summer as part of the GNOME Foundation’s Outreach Program for Women (OPW), which provides paid internships with participating organizations to encourage more women to get involved with free and open source software. Both of us were assigned the task of working on outreach efforts.



Why Wikidata is so important to Histropedia

The following is a guest post we received from our friends at the Histropedia project. We met at Wikimania 2014 in London and they told us how Wikidata is useful for them. Here is their write-up.

For those who don’t yet know: Histropedia is a project using Wikipedia and Wikidata to create the world’s first timeline of everything in history.
Earlier this year I wrote on the Histropedia blog about how important Wikidata is for our project. At the time we had just switched from trying to get dates from Wikipedia articles (from the infoboxes) to using Wikidata items. We had a reasonable amount of success with the infoboxes, but encountered some major limitations. Firstly, we were only able to get dates precise to a year, and in some cases we were unable to recognise the date format used, so we could not even get the year. And of course there were the articles with no infobox at all.
By switching to Wikidata as the primary source for dates we immediately added over 700,000 date properties to our events, often with much better precision than just years. This was incredibly important to the project, as it not only greatly improved the accuracy of our timelines but also allowed us to increase the available zoom levels. So now, thanks to Wikidata, we can zoom right in to see a day-by-day view of history.
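To make concrete what that gain in precision looks like: every time value on Wikidata carries an explicit precision field (9 = year, 10 = month, 11 = day), so a consumer like Histropedia can tell exactly how finely an event can be placed on a timeline. Here is a rough sketch in Python using the public Wikidata API with the real property P569 (date of birth); the rest is our illustration, not Histropedia’s actual code.

```python
import requests

API = "https://www.wikidata.org/w/api.php"
PRECISION = {9: "year", 10: "month", 11: "day"}

def date_of_birth(qid):
    """Return the date of birth (P569) of an item together with its precision."""
    response = requests.get(API, params={
        "action": "wbgetentities", "ids": qid,
        "props": "claims", "format": "json"})
    claims = response.json()["entities"][qid]["claims"]
    value = claims["P569"][0]["mainsnak"]["datavalue"]["value"]
    return value["time"], PRECISION.get(value["precision"], value["precision"])

# Douglas Adams (Q42) has a day-precision date of birth, so a timeline can
# pin the event to an exact day instead of just a year:
print(date_of_birth("Q42"))  # ('+1952-03-11T00:00:00Z', 'day')
```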


Wikidata at Wikimania 2014 in London

The German version of this blog post can be read here.

Wikidata was one of the dominating themes at Wikimania 2014. Many talks mentioned it in passing, even those that did not focus on technical topics. Structured data with Wikibase was a topic that came up often, be it in discussions about the future of Wikimedia Commons or in GLAM-related projects.

When it comes to Wikidata, more and more people are beginning to see the light, so to speak. Fittingly, Lydia Pintscher’s talk on Wikidata used this metaphor for the project: creating more dots of light on the map of free knowledge.

Another excellent talk on Wikidata was dedicated to the research around it. Markus Krötzsch took us on a journey through the data behind the free knowledge base that anyone can edit.

Of course, there were meetups by the Wikidata community, and hacks were developed during the hackathon. One enthusiastically celebrated project came from the Russian Wikipedia, which has had infoboxes fed from Wikidata for quite a while now. What they added at the hackathon is the ability to edit data in the columns of these infoboxes in place – and change it on Wikidata at the same time, pretty much like a visual editor for Wikidata. Read about their hack on Wikidata, or have a look at the source code (which is still a long way from being easy to adapt to other Wikipedias, but it’s a start).
