


Wikidata for Research – a grant proposal that anyone can edit

German summary: Vor einigen Wochen wurde an dieser Stelle von einer Initiative berichtet, im Rahmen derer Wikidata-Einträge für alle knapp 40.000 menschlichen Gene angelegt wurden. Hier nun baut Daniel Mietchen – Wissenschaftler am Museum für Naturkunde Berlin und aktiver Wikimedianer – auf dieser Idee auf und stellt einen europäischen Forschungsantrag zur Integration von Wikidata mit wissenschaftlichen Datenbanken vor, den jede und jeder via Wikidata editieren kann, ehe er in knapp sechs Wochen eingereicht wird.


A few weeks ago, this blog was enriched with a post entitled “Establishing Wikidata as the central hub for linked open life science data”. It introduced the Gene Wiki – a wiki-based collection of information related to human genes – and reported upon the creation of Wikidata items for all human genes, along with their annotation with statements imported from a number of scientific databases. The blog post mentioned plans to extend the approach to diseases and drugs, and a few weeks later (in the meantime, Wikidata had won an Open Data award), the underlying proposal for the grant that funds these activities was made public, followed by another proposal that involves Wikidata as a hub for metadata about audiovisual materials on scientific topics.

Now it’s time to take this one step further: we plan to draft a proposal that aims at establishing Wikidata as a central hub for linked open research data more generally, so that it can facilitate fruitful interactions at scale between professional research institutions and citizen science and knowledge initiatives. We plan to draft this proposal in public – you can join us and help develop it via a dedicated page on Wikidata.

The proposal – provisionally titled “Wikidata for research” – will be coordinated by the Museum für Naturkunde Berlin (for which I work), in close collaboration with Wikimedia Germany (which oversees development of Wikidata). A group of ca. 3–4 further partners is invited to join in, and you can help determine who these may be. Maastricht University has already signaled interest in covering data related to small molecules, and we are open to suggestions from any discipline, as long as there are relevant databases suitable for integration with Wikidata.

Two aspects – technical interoperability and community engagement – are the focus points of the proposal. In terms of the former, we are interested in external scientific databases providing information to Wikidata, with the intention that both parties profit from the exchange. Information may take the form of new items, new properties, or statements added to existing items. One focus here would be on mapping the identifiers that different databases use to describe related concepts, and on aligning the controlled vocabularies built around them.
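As a rough illustration of this kind of identifier mapping, here is a minimal Python sketch that merges records about the same gene from two hypothetical database exports, one keyed by Entrez Gene ID and one by Ensembl ID. The record contents and the mapping table are invented toy data; real alignment would of course run over full database dumps.

```python
# Hypothetical sketch: align records from two external databases
# that describe the same concept under different identifiers.
# The record contents and the mapping table are toy stand-ins.

def align_records(db_a, db_b, id_map):
    """Merge records from two databases into one dict per concept,
    using id_map to translate db_b identifiers into db_a identifiers."""
    merged = {}
    for ext_id, record in db_a.items():
        merged[ext_id] = dict(record)  # copy, so inputs stay untouched
    for ext_id, record in db_b.items():
        canonical = id_map.get(ext_id)
        if canonical in merged:
            merged[canonical].update(record)  # enrich the existing entry
    return merged

entrez = {"7157": {"symbol": "TP53"}}              # Entrez Gene style IDs
ensembl = {"ENSG00000141510": {"chromosome": "17"}}  # Ensembl style IDs
mapping = {"ENSG00000141510": "7157"}              # invented mapping table

print(align_records(entrez, ensembl, mapping))
# {'7157': {'symbol': 'TP53', 'chromosome': '17'}}
```

The same pattern extends to aligning controlled vocabularies: the mapping table is itself exactly the kind of information a Wikidata item can hold, as statements linking one external identifier to another.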

In terms of community engagement, the focus would be on the curation of Wikidata-based information, on syncing of curation with other databases (a prototype for that is in the making) and especially on the reuse of Wikidata-based information – ideally in ways not yet possible – be it in the context of Wikimedia projects or research, or elsewhere.

Besides the Gene Wiki project, a number of other initiatives have been active at the interface between the Wikimedia and scholarly communities. Several of these have focused on curating scholarly databases, e.g. Rfam/Pfam and WikiPathways, which would thus seem like good candidates for extending the Gene Wiki’s Wikidata activities to other areas. There are also a wide range of Wikiprojects on scientific topics (including within the humanities), both on Wikidata and beyond. Some of them team up with scholarly societies (e.g. Biophysical Society or International Society for Computational Biology), journals (e.g. PLOS Computational Biology) or other organizations (e.g. CrossRef). In addition to all that, research about wikis is regularly monitored in the Research Newsletter.

The work on Wikidata – including contributions by the Gene Wiki project – is being performed by volunteers (directly or through semi-automatic tools), and the underlying software is open by default. Complementing such curation work, the Wikidata Toolkit has been developed as a framework to facilitate analysis of the data contained in Wikidata. The funding proposal for that is public too and was indeed written in the open. Outside Wikidata, the proposal for Wikimedia Commons as a central hub of multimedia from open-access sources is public, as is a similar one to establish Wikisource as a central hub for open-access literature (both of these received support from Wikimedia Germany).

While such openness is customary within the Wikimedia community, it contrasts sharply with current practice within the research community. As first calls for more transparency in research funding are emerging, the integration of Wikidata with research workflows seems like a good context in which to explore the potential of drafting a research proposal in public.

Like several other Wikimedia chapters, Wikimedia Germany has experience with participation in research projects (e.g. RENDER), but it is not in a position to lead such endeavours. Its interactions with the research community have intensified over the last few years, e.g. through GLAM-Wiki activities, participation in the Leibniz research network Science 2.0, in a traveling science exhibition, and in events around open science. In parallel, the interest of research institutions in engaging with Wikimedia projects has grown, especially so for Wikidata.

One of these institutions is the Museum für Naturkunde Berlin, which has introduced Wikidata-related ideas into a number of research proposals already (no link here – all non-public). One of the largest research museums worldwide, it curates 30 million specimens and is active in digitization, database management, development of persistent identifiers, open-access publishing, semantic integration and public engagement with science. It is involved in a number of activities aimed at bringing biodiversity-related information together from separate sources and making them available in a way compatible with research workflows.

Increasingly, this includes efforts towards more openness. For instance, it participated in the Open Up! project that fed media on natural history into Europeana, and in the Europeana Creative project that explores reuse scenarios for Europeana materials, and it leads the EU BON project focused on sharing biodiversity data. Within the framework of the pro-iBiosphere project, it was also one of the major drivers behind the launch of the Bouchout Declaration for Open Biodiversity Knowledge Management, which brings the biodiversity research community together around principles of sharing and openness. Last but not least, the museum participated in the Coding da Vinci hackathon that brought developers together with data from heritage institutions.

As a target for submission of the proposal, we have chosen a call for the development of “e-infrastructures for virtual research environments”, issued by the European Commission. According to the call, “[t]hese virtual research environments (VRE) should integrate resources across all layers of the e-infrastructure (networking, computing, data, software, user interfaces), should foster cross-disciplinary data interoperability and should provide functions allowing data citation and promoting data sharing and trust.”

It is not hard to see how Wikidata could fit in there, nor to see that making it fit will still require work. Considering that Wikidata is a global platform and that its initial funding came mainly from the United States, it would be nice to see Europe taking its turn now. The modalities of this kind of EU funding are such that funds can only be provided to certain kinds of legal entities based in Europe, but we appreciate input from anywhere as to how the project should be shaped.

In order to ensure compatibility with both Wikidata and academic customs, all materials produced for this proposal shall be dual-licensed under CC BY-SA 3.0 and CC BY 4.0.

The submission deadline is very soon – on January 14, 2015, 17:00 Brussels time. Let’s find out what we can come up with by then – see you over there!

 

Written by Daniel Mietchen


Establishing Wikidata as the central hub for linked open life science data

German summary: Der wunderbaren Wikidata-Community ist es zu verdanken, dass jedes menschliche Gen (laut dem United States National Center for Biotechnology Information) jetzt durch einen Eintrag auf Wikidata repräsentiert wird. Benjamin Good, Andrew Su und Andra Waagmeester haben uns dankenswerterweise einen kurzen Bericht über ihre Arbeit mit Wikidata zur Verfügung gestellt.


Thanks to the amazing work of the Wikidata community, every human gene (according to the United States National Center for Biotechnology Information) now has a representative entity on Wikidata. We hope that these are the seeds for some amazing applications in biology and medicine. Here is a report from Benjamin Good, Andrew Su, and Andra Waagmeester on their work with Wikidata. Their work was supported by the National Institutes of Health under grant GM089820.

Graphical representation of the idealized human diploid karyotype, showing the organization of the genome into chromosomes. This drawing shows both the female (XX) and male (XY) versions of the 23rd chromosome pair. Courtesy of the National Human Genome Research Institute [Public domain], via Wikimedia Commons.

The life sciences are awash in data.  There are countless databases that track information about human genes, mutations, drugs, diseases, etc.  This data needs to be integrated if it is to be used to produce new knowledge and thereby improve the human condition.  For more than a decade many different groups have proposed and many have implemented solutions to this challenge using standards and techniques from the Semantic Web.  Yet, today, the vast majority of biological data is still accessed from individual databases such as Entrez Gene that make no attempt to use any component of the Semantic Web or to otherwise participate in the Linked Open Data movement.  With a few notable exceptions, the data silos have only gotten larger and problems of fragmentation worse.

In parallel to the appearance of Big Data in biology (and elsewhere), Wikipedia has arisen as one of the most important sources of all information on the Web.  Within the context of Wikipedia, members of our research team have helped to foster the growth of a large collection of articles that describe the function and importance of human genes. Wikipedia, and the subset of it that focuses on human genes (which we call the Gene Wiki), have flourished due to their centrality, the presence of the edit button, and the desire of the larger community to share knowledge openly.

Now, we are working to see if Wikidata can be the bridge between the open community-driven power of Wikipedia and the structured world of semantic data integration.  Can the presence of that edit button on a centralized knowledge base associated with Wikipedia help the Semantic Web break through into everyday use within our community?  The steps we are planning to take to test this idea within the context of the life sciences are:

  1. Establishing bots that populate Wikidata with entities representative of three key classes: genes, diseases, and drugs.
  2. Expanding the scope of these bots to include the addition of statements that link these entities together into a valuable network of knowledge.
  3. Developing applications that display this information to the public and both encourage and enable people to contribute their knowledge back to Wikidata.  The first implementation will be to use the Wikidata information to enhance the articles in Wikipedia.

We are excited to announce that the first step on this path has been completed!
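As a rough sketch of what step 1 involves: a bot creates an item by sending a JSON payload to the Wikidata API (action=wbeditentity with new=item). The snippet below only constructs such a payload for one example gene; a real bot would additionally authenticate and POST it. P351 is Wikidata's "Entrez Gene ID" property, but the exact claim layout shown here is a simplified assumption, not the project's actual bot code.

```python
import json

# Simplified sketch of the payload a gene bot might send to
# the Wikidata API (action=wbeditentity&new=item). P351 is the
# "Entrez Gene ID" property; the gene below is just an example.

def build_gene_item(symbol, name, entrez_id):
    return {
        "labels": {"en": {"language": "en", "value": symbol}},
        "descriptions": {"en": {"language": "en", "value": name}},
        "claims": [{
            "mainsnak": {
                "snaktype": "value",
                "property": "P351",  # Entrez Gene ID
                "datavalue": {"value": entrez_id, "type": "string"},
            },
            "type": "statement",
            "rank": "normal",
        }],
    }

payload = build_gene_item("TP53", "tumor protein p53", "7157")
print(json.dumps(payload, indent=2))
```

Steps 2 and 3 would then add further statements linking such items to diseases and drugs, and surface them back in Wikipedia articles.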



Transatlantic work on structured data in Berlin

The German version of this post can be found here.

Last week Wikimedia Deutschland was happy to welcome guests for a special technical discussion that spanned an entire week at the headquarters in Berlin. Members of the multimedia team of the Wikimedia Foundation in San Francisco, members of the team developing software for Wikidata at Wikimedia Deutschland, and technical experts and developers from the volunteer community came together to discuss Wikimedia Commons and structured data.

Structured data was an important topic in many talks on technology at this year’s Wikimania in London. It is the principle behind Wikidata — a free knowledge base with data that can be filtered, sorted, queried, and of course edited by machines and human beings alike, all in a way that goes beyond storing wikitext in a specific human language. The technology in the engine room of Wikidata is a software project called Wikibase which stores data in a structured way. Ideas that Wikimedia Commons, the free repository of media files, could benefit from structured data and Wikibase have been floating around for a long time, as have thoughts about making Commons more user-friendly and making license-conforming re-use of pictures easier. The weeklong meeting in Berlin marked the starting point of a planning and discussion process that brought together Wikimedians from both sides of the pond.



Outreach Program for Women at Wikidata

German summary: Im Mai beteiligte sich Wikidata am Outreach Program for Women. Helen Halbert und Anjali Sharma kümmerten sich um die Dokumentation von Wikidata für die Öffentlichkeit und Community, von Guided Tours, die an Wikidata heranführen bis zum Befüllen der Social-Media-Kanäle. Der folgende Gastbeitrag auf Englisch wurde von Helen (zusammen mit Anjali) nach ihrer Teilnahme an dem Programm bei uns verfasst.


This May, Wikidata was part of the Outreach Program for Women. Helen Halbert and Anjali Sharma took care of documenting Wikidata for the general public and the community, with tasks ranging from guided tours for those new to Wikidata to handling the various social media channels. The following guest post is a summary by Helen (written together with Anjali) about her time with Wikidata.

The journey to contributor

This past May, Anjali and I were thrilled to learn we both would be working for Wikidata for the summer as part of the GNOME Foundation’s Outreach Program for Women (OPW), which provides paid internships with participating organizations to encourage more women to get involved with free and open source software. Both of us were assigned the task of working on outreach efforts.



Wikidata at Wikimania 2014 in London

The German version of this blog post can be read here.

Wikidata was one of the dominating themes at Wikimania 2014. Many talks mentioned it in passing, even those that didn’t focus on technical topics. Structured data with Wikibase was a frequently discussed topic, be it in discussions on the future of Wikimedia Commons or in GLAM-related projects.

When it comes to Wikidata, more and more people are beginning to see the light, so to speak. It was fitting that Lydia Pintscher’s talk on Wikidata used this metaphor for the project: creating more dots of light on the map of free knowledge.

Another excellent talk on Wikidata was dedicated to the research around it. Markus Krötzsch took us on a journey through the data behind the free knowledge base that anyone can edit.

Of course, there were meetups by the Wikidata community, and hacks were developed during the hackathon. One enthusiastically celebrated project came from the Russian Wikipedia, which has had infoboxes fed from Wikidata for quite a while now. What its contributors added at the hackathon was the ability to edit data in the columns of these infoboxes in place — and change it on Wikidata at the same time, pretty much like a visual editor for Wikidata. Read about their hack on Wikidata, or have a look at the source code (which is still a long way from being easy to adapt to other Wikipedias, but it’s a start).


Guided tours and Wikidata: How to explain a complex project and encourage new editors

The following is a contribution by Bene*, admin and bureaucrat on Wikidata and author of the guided tours on Wikidata. He explains the motivation behind guided tours and how they can attract new editors to the Wikidata community:

Wikidata is no longer a brand-new project, but a lot of people still do not really know what it actually does. This makes it hard for new editors to get involved with the project and become active contributors. We realized that something had to change; that we had to make things easier to understand and take our newbies by the hand.

Wikidata guided tour intro

Wikidata guided tour labels

 

When it comes to planning how to help new editors, a first approach is typically to create help pages for individual topics. However, these pages are often very long and do not do a good job of explaining concepts beyond their theoretical context. Another way to explain things is to create illustrative presentations including slideshows. Unfortunately, the users still only get the theory and have to make the leap from reading to actually editing on their own. Keeping all this in mind, we decided that we needed a format that is integrated with the editing interface of Wikidata and gives users the opportunity to edit content through a series of practical exercises.

In fact, this is exactly what the GuidedTour extension does. It provides a way to create presentations, or rather interactive tutorials, in which the user can actually complete a set of actions. One great use case of Guided Tours is the Wikipedia Adventure. However, for Wikidata we needed something different, because the item editing interface has very little in common with a standard wiki page. The pages contain more buttons and small text fields, because an item does not simply consist of text but stores structured data instead. Therefore, we adjusted the guided tours to our needs and added an overlay feature to highlight single design elements. We also made the tours translatable, as Wikidata is a multilingual project. If you are interested in the result, just try it out for yourself: there are currently two Wikidata tours available—one on items, and one on statements.

Wikidata items tour stats

Wikidata statements tour stats

As you can see from the usage statistics, the work was well worth the effort. Since the release on 11th July more than 150 users have taken the first tour and more than 100 went on to complete the second one. This shows the impact our tours have had and the great need for them. It was lots of fun to create and implement the interactive tutorials but there is still a lot of work to do. New tours are being worked on and the existing ones are also in need of translations. If you have any ideas for new tours or improvements to the existing ones, just add your comments to the coordination page. You might also want to help translate the released tours (which is just like translating any wiki page). You can translate the existing tutorials about items and about statements.

A note from Lydia (Wikidata’s product manager): Thank you so much to Bene* (Wikidata community developer) and Helen (Free Software Outreach Program for Women intern with Wikimedia) who have worked together over the past weeks to make these first guided tours a reality. It’s great to see us making progress towards making Wikidata easier to use every single day.

 


Pushing Wikidata to the next level

(The German version of this post is here.)

In early 2010 I met Denny and Markus for the first time in a small room at the Karlsruhe Institute of Technology to talk about Semantic MediaWiki, its development and its community. I was intrigued by the idea they’d been pushing for since 2005 – bringing structured data to Wikipedia. So when the time came to assemble the team for the development of Wikidata and Denny approached me to do community communications for it there was no way I could have said no. The project sounded amazing and the timing was perfect since I was about to finish my studies of computer science. In the one and a half years since then we have achieved something amazing. We’ve built a great technical base for Wikidata and much more importantly we’ve built an amazing community around it. We’ve built the foundation for something extraordinary. On a personal level I could never have dreamed where this one meeting in a small room in Karlsruhe has taken me now.

From now on I will be taking over product ownership of Wikidata as its product manager.

Up until today we’ve built the foundation for something extraordinary. But at the same time there are still a lot of things that need to be worked on by all of us together. The areas that we need to focus on now are:

  • Building trust in our data. The project is still young and the Wikipedia editors and others are still wary of using data from Wikidata on a large scale. We need to build tools and processes to make our data more trustworthy.
  • Improving the user experience around Wikidata. Building Wikidata to the point where it is today was a tremendous technical task that we achieved in a rather short time. This though meant that in places the user experience has not gotten as much attention. We need to make the experience of using Wikidata smoother.
  • Making Wikidata easier to understand. Wikidata is a very geeky and technical project. However, to be truly successful, the ideas behind it will need to be easy to grasp.

These are crucial for Wikidata to have the impact we all want it to have. And we will all need to work on those – both in the development team and in the rest of the Wikidata community.

Let’s make Wikidata a joy to use and get it used in places and ways we can’t even imagine yet.


Data for the people!

(The German version of this post is here.)

In the first session of the first Wikimania, I presented the idea of enriching Wikipedia with structured data. Asked how long it would take to implement this, I answered: “Two weeks, if you know the MediaWiki software well.”

That was in 2005. It turned out I would be slightly off.


Now, in 2013, we finally started using structured data from Wikidata in the Wikipedias. The project is still in its infancy, but I am already extremely proud of the Wikidata team and what they have achieved. I am very thankful to the many, many people that helped us get to where we are today (I started listing them explicitly, but this post became too long). There are still many things that need to be done, but the rough sketch of what Wikidata is and is not has been drawn, and I think we have created a very interesting new project. I am confident enough about Wikidata and its future, or else I would not be leaving.



A categorical imperative?

(The German version of this article is here.)

This is the third in a short series of blog entries in which I explain some of the design decisions for Wikidata. The first one was about restricting property values or properties, the second about veracity and verifiability. The essays represent my personal opinion, and are not to be understood as the official opinion of the Wikidata project.

First, a name people doing knowledge representation care very, very strongly about: Barbara. Introduced about 2500 years ago by Aristotle (teacher to Alexander the Great, who had conquered the known world and beyond by the age of 33. School and awesome teachers do matter!) and named a millennium later by my favorite philosopher, Boethius. (Seriously, this guy is awesome. He had everything you could have hoped for back in that time, and he lost it all. Read his bio. He had both his sons made consuls of the mightiest empire of the world, and then suddenly he got his riches taken, family members executed, and was awaiting his own execution in exile in a prison. Instead of lamenting, he sat down and wrote a book about what really is important in life. Read his Consolation of Philosophy. It remained on the bestselling list for a few centuries, not without a reason. Kings copied it by hand!) Barbara is part of the logical foundation of anything that has to do with classes. You might know classes as types, categories, genera, or anything else that is somehow taxonomical. Barbara is a type of syllogism, thus a rule for correct reasoning. Modus Barbara states that if all A are B and all B are C, well then also all A are C. As an example: If we know that all billionaires are human, and we know that all humans are mortal, bang, all billionaires are mortal, too.
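For the formally inclined, modus Barbara is small enough to write down and machine-check; here is a minimal sketch in Lean 4 (my own rendering, not part of the original post):

```lean
-- Modus Barbara: if all A are B, and all B are C, then all A are C.
example {α : Type} (A B C : α → Prop)
    (h₁ : ∀ x, A x → B x) (h₂ : ∀ x, B x → C x) :
    ∀ x, A x → C x :=
  fun x ha => h₂ x (h₁ x ha)
```

Instantiating A, B and C with "billionaire", "human" and "mortal" gives exactly the example above.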


Wikidata quality and quantity

One of the goals of the Wikidata development project is a community that is strong enough to maintain the content in Wikidata. The community is – as with all other Wikimedia projects – the only guarantee of quality and sustainability.

None of the objectives of the Wikidata development project is to be the largest collection of data on the net. The sheer number of statements in Wikidata is not a metric that is indicative of healthy growth or quality. Since it is an easy number to obtain and to understand, it is nonetheless used a lot, but we should not attach too much importance to it.

This leads to the question of which metrics are meaningful for quality in Wikidata. And I have to admit: we do not know. This may seem particularly ironic, since my dissertation was on the topic of quality measurement of knowledge structures. But it is not surprising: it has only been possible to make statements in Wikidata for about half a year. The site is in continuous development, and some important pieces of the quality assurance planned for Wikidata are not yet developed – including, for example, ranks for statements, web links as a data type, the protection of individual statements, and aggregated views of the data. How to make quality measurable in Wikidata, and which metrics correlate with quality, has simply not yet been investigated sufficiently. I expect that science will provide some answers in the coming months and years.

To get an overview of the development of Wikidata, we must for now make assumptions about which numbers likely indicate quality. I hereby call on the community to make suggestions and discuss them. A few first thoughts below.

The number of data elements (items) does not seem to be a useful measure. So far this number has been driven almost exclusively by the items required for the storage of language links. Accordingly, there was strong initial growth while the links were being transferred, and in recent months the number has been relatively stable.

The number of edits per page seems to be more meaningful. Last week it went above 5.0 and is rising quickly. The number of edits alone is less meaningful in Wikidata than in many other Wikimedia projects, as an extraordinarily high proportion of the edits are done by bots. Bots are programs written by users to make changes automatically or semi-automatically. The bots are controlled by a group of about 80 users. This leads many to the idea that Wikidata is only written by bots. But that’s not true: every month 600,000 to 1 million edits are performed by human users. These are numbers that are reached only by the most active Wikipedias – including their own bot edits. Worries that Wikidata’s growth is too fast and that the quality of the data would suffer have so far, except for anecdotes, not proven true.
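The edits-per-page figure is easy to reproduce: the MediaWiki API exposes site-wide counts via action=query&meta=siteinfo&siprop=statistics. A minimal sketch, using invented sample numbers in place of a live API response:

```python
# Sketch: compute edits per page from MediaWiki siteinfo statistics.
# A live query would fetch
#   https://www.wikidata.org/w/api.php?action=query&meta=siteinfo&siprop=statistics&format=json
# and read response["query"]["statistics"]. The numbers below are
# invented sample values, not real Wikidata figures.

sample_stats = {"pages": 14_000_000, "edits": 70_000_000, "activeusers": 4_000}

def edits_per_page(stats):
    return stats["edits"] / stats["pages"]

print(edits_per_page(sample_stats))  # 5.0 with these sample numbers
```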

Perhaps the simplest metric is the number of active users. Active users in Wikimedia projects are defined as users who have contributed at least five edits in a given month. Wikidata has nearly 4,000 active users, which makes it rank 6th among the most active Wikimedia projects, together with the Japanese and Russian Wikipedias, behind only the English Wikipedia, Commons, and the German, French and Spanish Wikipedias. In other words, Wikidata has more active users than 100 smaller Wikipedias combined! Whenever the smaller Wikipedias access Wikidata, they rely on a knowledge base that is maintained by a much larger community than their own Wikipedia’s. But the advantages don’t end there: by using the content of Wikidata in the Wikipedias, it becomes more visible, gets more attention, and errors are more likely to be found (although we still lack the technical means to then correct the error easily from Wikipedia – but that is on the development plan). This mainly benefits the smaller Wikipedias.

But it also already has useful advantages for the larger Wikipedias. An exciting – and for me completely unexpected – opportunity for quality assurance came when the English Wikipedia decided not to simply take IMDb IDs from Wikidata, but instead to load them from Wikidata, compare them with the existing numbers in Wikipedia, and in the case of inconsistency add a hidden category to the article. This way, hard-to-detect errors and easily vandalised data got an additional safety net: it may well be that a number on the English Wikipedia has a typo, or that some especially funny person switched the ID of Hannah Montana’s latest film with that of Natural Born Killers on the French Wikipedia – but now these situations are detected quickly and automatically. Data that is validated in several ways like this can then be used by the smaller Wikipedias with little concern.
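In outline, such a cross-check is just a small comparison function. On Wikidata the IMDb ID lives in property P345; the tracking-category name and sample ID values below are my own invented illustrations, not the English Wikipedia's actual template code:

```python
# Sketch of the cross-check described above: compare the IMDb ID in a
# Wikipedia infobox with the one on the corresponding Wikidata item
# (property P345, "IMDb ID") and flag mismatches with a tracking
# category. Category name and sample IDs are invented for illustration.

TRACKING_CATEGORY = "Category:IMDb ID different from Wikidata"  # hypothetical name

def check_imdb_id(infobox_value, wikidata_value):
    """Return the hidden tracking category to add, or None if consistent."""
    if infobox_value and wikidata_value and infobox_value != wikidata_value:
        return TRACKING_CATEGORY
    return None

print(check_imdb_id("tt0110632", "tt0110632"))  # None: values agree
print(check_imdb_id("tt0110632", "tt0099999"))  # mismatch gets flagged
```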

As mentioned earlier, a lot is still missing and Wikidata is a very young project. Many of the statements in Wikidata are without a source – but then, even in the German Wikipedia the statement that Paris is the capital of France does not have a source. Should we impose much stricter rules on a much smaller project after such a short time? But, one may interject, if a statement has no source, I cannot use it in my Wikipedia. And that is perfectly okay: it is already possible to use data from Wikidata only if it has a source of a certain type.
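Such source-based filtering can be sketched in a few lines against the JSON structure Wikidata uses for claims, where each reference is a set of snaks keyed by property. P248 ("stated in") is a real Wikidata reference property; the claim data below is invented toy data:

```python
# Sketch: keep only claims that carry a reference of a given type,
# mirroring "use data from Wikidata only if it has a source".
# P248 is Wikidata's "stated in" property; the claims are toy data.

def sourced_claims(claims, required_ref_property="P248"):
    """Keep only claims citing a reference that uses the given property."""
    keep = []
    for claim in claims:
        for ref in claim.get("references", []):
            if required_ref_property in ref.get("snaks", {}):
                keep.append(claim)
                break  # one qualifying reference is enough
    return keep

claims = [
    {"id": "c1", "references": [{"snaks": {"P248": ["some database"]}}]},
    {"id": "c2", "references": []},  # unsourced, so filtered out
]
print([c["id"] for c in sourced_claims(claims)])  # ['c1']
```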

There are two ways to ensure the long-term quality of Wikipedia: allow users to be more effective, or attract more users. We should continue to pursue both, and Wikidata uses both very effectively: the mechanisms described above aim to give users more powerful tools and processes for building quality assurance, and at the same time Wikidata has already brought more than 1,300 new users to the Wikimedia projects who had not edited in the other Wikimedia projects before.

Wikidata’s main goal is to support the Wikimedia projects: it should enable higher quality of their content while reducing the effort required to achieve it. We need more metrics that capture this goal and show how we are evolving. The simple metrics all indicate that the initial growth in breadth has come to an end after a few months, and that the project is now gaining in depth and quality. There are useful applications for small as well as for large projects. But it is also clear that I am an avid supporter of Wikidata and thus biased, and I therefore want to start a call for ideas on how to track Wikidata’s effect critically and accurately.
