This blog post is the third in the series about CommonsDB. If you don’t know the project at all, I recommend checking out the first one; it also has a nice video introducing the project.

On a rainy weekend in March, about 80 people gathered in Arnhem, the Netherlands, for the Wikimedia Hackathon Northwestern Europe 2026. I came there hoping to get input on how CommonsDB could best be put to use in our projects, to socialize our progress, and, of course, to do a bit of hacking and improve our prototype.

Early on the first day, when I showed how an initial prototype, built by Sebastian Berlin, developer at Wikimedia Sverige, can help you check whether an image you are about to upload is already declared in CommonsDB, I saw people grasp the potential. Since CommonsDB is a registry of public domain and openly licensed works, it can give the uploader assurance that they are doing the right thing. However, at the start of the hackathon the prototype still required several clicks and wasn’t making use of the CommonsDB search API. My mission was to make progress in that area.

Technical tinkering

The starting point was a user script that submits the image to a service that generates a code, the International Standard Content Code, or ISCC for short. The interface also gave you a link to the CommonsDB Explorer with the generated ISCC, so that you could click through and see if the image was already declared. For example, it could be a link like this one: https://registry.commonsdb.org/explorer/KEC6SZK633OFYX5YSBRWGPNMK3M7AAWE6AGBUYQPMSLOMFE22TJJ5AA That lets you look in the registry, see if there are any matches, and find the link to the source if you want to explore more.

So far, so good. But it requires some manual work, which we can reduce thanks to the CommonsDB search API. My improvement during the hackathon was to use this API to get the answers in JSON format instead. For original images and images not yet declared in CommonsDB, it simply reports that there was no match and lets the user continue as usual. If there is a match, the script inspects the response to see who the source of the image is in CommonsDB. If the source is not Wikimedia, it shows which license the image has and who made that claim, with a direct link to the original source. This helps the user with the next step of selecting the license, and lets them investigate if the image looks curious for some reason.
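To illustrate the decision flow described above, here is a minimal sketch in Python. The response shape and field names (`matches`, `source`, `license`) are assumptions made for illustration; the actual CommonsDB search API may return a different structure.

```python
import json

# Hypothetical response shapes; the real CommonsDB API fields may differ.
SAMPLE_NO_MATCH = '{"matches": []}'
SAMPLE_EXTERNAL = (
    '{"matches": [{"source": "Europeana", "license": "CC BY 4.0",'
    ' "url": "https://example.org/item/1"}]}'
)
SAMPLE_WIKIMEDIA = (
    '{"matches": [{"source": "Wikimedia",'
    ' "url": "https://commons.wikimedia.org/wiki/File:Example.jpg"}]}'
)


def decide_upload_hint(response_json):
    """Return a short hint for the uploader based on a search response."""
    matches = json.loads(response_json).get("matches", [])
    if not matches:
        # No declaration found: let the user continue as usual.
        return "no-match"
    first = matches[0]
    if first.get("source") == "Wikimedia":
        # A similar file already exists on Commons: show thumbnail and link.
        return "already-on-commons"
    # Declared elsewhere: surface the license claim and its source.
    return f"declared-as:{first.get('license')}"
```

In the user script, each branch would map to a different piece of interface: nothing, a Commons thumbnail with a link, or the license claim with a link to its source.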

If the results include a match with Wikimedia as the source, the user is shown that there is already a similar match on Wikimedia Commons, including a thumbnail and a link. The user can then decide whether they still want to upload a new version or not. Since the search is perceptual, it would, for example, likely show a match for an image that has undergone digital restoration work or similar, which is an appropriate new upload.

In the movie below, we see two use cases. The first is an upload of an image from Europeana that is in CommonsDB. The user gets a link, clicks through, and can see that it looks correct, if they didn’t already know that this was the source. In the second, a user starts an upload of a heavily downscaled version of an image that was declared in CommonsDB by Wikimedia Sverige. The user gets the information and a thumbnail and can decide what to do. (In this case, hopefully cancel the upload.)

A demo of how the upload process could be supported by CommonsDB.

What could this look like in the future?

There’s still a lot of work to do, both on the visual design and on corner cases: how to show this when doing multiple uploads, or what to do if there are multiple matches in the registry. But the basic workflow would stay similar, just smoother for the user. If there is no match, for example, there is not really a reason to show the user anything.

When there is a match, in addition to showing the license, it would be neat to also suggest which license template to use, making it much easier for the user to get it right.

Some feedback received during the hackathon was to also log when the user makes a different choice than the suggestion. This could be used to see whether the user made an error, or made a judgment call that another rights statement was more appropriate for Wikimedia Commons.

Currently, this is a user script, and it could perhaps be turned into a gadget. Even better would be to integrate it more deeply, perhaps as an extension, together with a Wikimedia-hosted ISCC-generating service. This would significantly reduce the time it takes to get the code.

Big auditorium and screen and a man behind a computer on a podium with a mic in hand pointing at the screen.
Jan during the showcase, frantically trying to keep up with the recorded video.

Finding a new use case

While we have some listing of potential uses at Meta-Wiki, and some other implementation ideas, during the hackathon we also heard about another use case from Wikimedia Ukraine and Wiki Loves Monuments. They would benefit from a tool for the situation where users upload many similar images shot in “burst mode”: images that are not exactly identical, but often strikingly similar. Experiments with a couple of example images verified that such images can indeed be identified through the perceptual hash ISCC provides. While we already envisioned a tool that could compare existing images on Commons, this showed that there are more cases of potential “duplicates” to be found.
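The underlying similarity check can be sketched as a Hamming distance between hash payloads: nearly identical frames differ in only a few bits. The hash values and threshold below are made up for illustration; real ISCC codes are base32-encoded composites, and a production tool would decode and compare the actual content-code bits.

```python
def hamming_distance(hash_a: int, hash_b: int) -> int:
    """Number of differing bits between two same-length hash payloads."""
    return bin(hash_a ^ hash_b).count("1")

# Illustrative 64-bit perceptual hashes: two "burst mode" frames that are
# nearly identical, and one unrelated image (values are made up).
frame1 = 0b1011001011110000101100101111000010110010111100001011001011110000
frame2 = 0b1011001011110000101100101111000010110010111100001011001011110001
unrelated = 0b0100110100001111010011010000111101001101000011110100110100001111

THRESHOLD = 10  # example near-duplicate cutoff; would be tuned on real data


def looks_similar(a: int, b: int, threshold: int = THRESHOLD) -> bool:
    """Treat two images as potential duplicates if their hashes are close."""
    return hamming_distance(a, b) <= threshold
```

A burst-mode helper could group uploads whose pairwise distances fall under the threshold, letting the user pick one representative shot.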

Other related development at the hackathon

CommonsDB also supports a declaration to provide an argument for why an image is in the public domain; they call this a PD rationale. Thanks to the groundwork laid by Paulina and others, CommonsDB has also chosen Wikidata as the identifier for these statements.

During the hackathon, a group of people made progress on how to model copyright statements in the structured data on Commons. When implemented at scale, this will provide a great way for us to submit those as PD rationales to CommonsDB.

For now, there are a lot of templates on Wikimedia Commons that give this information for an image. This is great for a human, but less accessible for machines. Currently we are mapping license templates to Wikidata items about public domain reasons so that we can already use this in our declarations. It might also be useful for adding this information to the structured data on Commons in bulk. You can help by mapping more templates.
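Such a mapping could be as simple as a lookup table. The template names below are real Commons license templates, but the Wikidata item IDs are placeholders; the actual items would come from the community mapping effort.

```python
# Sketch of mapping Commons license templates to Wikidata items describing
# public-domain rationales. The Q-ids are placeholders, not real items.
TEMPLATE_TO_RATIONALE = {
    "PD-old-70": "Q-PLACEHOLDER-1",   # author died more than 70 years ago
    "PD-US-expired": "Q-PLACEHOLDER-2",  # US copyright expired
}


def rationale_for_template(template_name: str):
    """Look up the PD rationale item for a license template, or None."""
    return TEMPLATE_TO_RATIONALE.get(template_name)
```

A declaration pipeline could run this lookup over the templates found on a file page and attach the resulting rationale items to the CommonsDB declaration.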

What’s happening next in the project?

We are continuing to declare files from Wikimedia Commons to CommonsDB. We just passed the one million mark, and hopefully we will have declared several millions by the end of the project. Still a long way to go for all files on Commons, but a start large enough to show the value of a registry like this.

We’ll also continue developing the prototype, to show the value for the community more clearly and possibly even have something useful for regular users. Not only will we be doing that work at home, we will also be at the hackathons in Milan and before Wikimania in Paris. Please talk to us if you are there and are curious or have ideas.

In parallel, we’ll also try to make more demos for some of the other ideas that we have. If you come up with any new ideas for how either the CommonsDB registry or access to perceptual hashes to find similar images could help you as a Wikimedian, please let us know!

Sharing knowledge

A hackathon is of course about meeting people and collaborating, not just staring at your own screen and working on your own projects all the time. I was happy to be able to make myself useful to other participants as well.

One of my favorite tools on Wikidata, Integraality / Property dashboard, needed input on how it is being used in practice, and the maintainer, Jean-Frédéric, interviewed me as user research.

User:Spinster had some ideas about data sets and Wikidata, and we had a great conversation about what could be done, particularly in terms of outreach.

User:ItsNyoty wanted to add an image blurring filter for sensitive images as a gadget. I had seen one being deployed on Swedish Wikipedia only hours earlier and could connect them. This turned into a crosswiki collaboration involving people who weren’t even at the hackathon, with Dutch Wikipedia quickly getting an advanced version deployed.

Since I love hackathons, and this was a well-organized one, I also took the chance to record a podcast episode (also on Commons) with two of the organizers with the hope of inspiring more people to run local and regional hackathons.

Finally, I want to point to the showcase page for the hackathon, where you can read more about what other people hacked on and find links to other participants’ blog posts summarizing the event.

WikiSami Workshop at Universitas Hindu Indonesia, Bali (Kasu Wardana CC BY 4.0)

WikiSami (Sum of All Manuscripts Bali) is a Wikimedia-supported initiative that aims to identify and reconnect Balinese palm-leaf manuscripts (lontar) preserved across international collections. Beyond documentation, WikiSami seeks to bring these manuscripts back into conversation with Balinese communities, especially younger generations, by creating spaces where manuscript heritage can be engaged with, discussed, and reinterpreted in locally meaningful ways.

This series of university workshops grew out of an earlier journey at Leiden University Library. While working with the Balinese manuscript collection there, the WikiSami team spent time identifying and mapping hundreds of lontar manuscripts belonging to the geguritan genre. When returning to Bali, it became clear that this work should not remain confined to archival lists or research notes. The workshops were conceived as a way to carry these encounters with manuscripts back home, inviting students to engage with texts that continue to live through recitation, study, and everyday cultural practice in Bali.

University Workshops in Bali

The workshops were held at two universities in Bali: Udayana University and Universitas Hindu Indonesia (UNHI). At Udayana University, the participants were students of Old Javanese studies, while at UNHI the workshops involved students with strong backgrounds in the Balinese language and literature. In total, around fifteen students took part. Although many of them were already familiar with lontar manuscripts from their academic training, most had never contributed to public knowledge platforms before.

Rather than approaching the workshops as formal lectures, the sessions were designed as collaborative spaces for discussion and practice. Students were encouraged to ask questions, share their experiences with Balinese literary traditions, and work together throughout the activities.

“It was really exciting to gain new knowledge and insights. I hope there will be another WikiSami workshop this year.” Username: seraphine

“It was enjoyable and gave me a new experience, as I had never tried creating a Wikipedia article before. I hope this activity can continue in the future.” Username: Kirana Nayaka

Writing About Geguritan from the Leiden Collection

The central activity of the workshops was the creation of Balinese Wikipedia articles about lontar manuscripts from the Leiden University Library collection, with a particular focus on the geguritan genre. Geguritan is a traditional Balinese poetic form that continues to be sung, recited, and taught in Bali today, making it a genre that bridges manuscript heritage and living cultural practice.

Working with the available metadata, participants explored the content, context, and significance of selected geguritan texts. They then collaborated to write Wikipedia articles in Balinese, aiming to make the information accessible to a broader audience. Through this process, students reflected on how manuscripts preserved outside Indonesia can still speak to local readers when presented in a familiar language and cultural framework.

Why Balinese Wikipedia Matters

Writing in Balinese Wikipedia played a crucial role in the workshops. For many participants, this was their first experience using Balinese as a language of public knowledge production. The act of writing in Balinese allowed students to address local readers directly and to frame manuscript collections in ways that resonate with community knowledge and cultural values.

At the same time, working with manuscripts held abroad prompted discussions about the global journeys of Balinese heritage. Students became more aware of how lontar manuscripts have travelled across borders and how digital platforms can help reconnect these dispersed collections with the communities they originate from.

Learning Through Participation

The workshops emphasized learning through participation. Students learned by editing, discussing sources, revising texts, and supporting one another throughout the writing process. This hands-on approach helped lower barriers to participation and demonstrated that contributing to shared knowledge does not require advanced technical skills but curiosity, collaboration, and care for the subject matter.

For several participants, the workshops marked their first step into contributing to open knowledge. Many expressed interest in continuing to write and explore other ways of engaging with Balinese manuscripts beyond the classroom.

Looking Ahead

These university workshops represent an important step in WikiSami’s broader effort to reconnect international manuscript collections with local communities in Bali. Before these activities, the project focused on curating and inputting metadata of Balinese manuscripts from the British Library into Wikidata. So far, a total of 154 manuscript records have been contributed, forming a structured foundation for further work. 

The workshops build on this foundation by bringing the data back into local contexts. This lets students interact directly with manuscript collections that were mostly only available through archival research before. Future activities will extend outreach beyond universities to schools and other educational settings, creating opportunities for younger audiences to engage with manuscript heritage in creative and participatory ways.

By transforming archival encounters into shared learning experiences, WikiSami hopes to ensure that Balinese manuscripts are not only preserved digitally but also actively read, discussed, and reimagined by the communities to which they belong.

Visual collage depicting milestones in the Wikimedia Foundation’s engagement with the United Nations system. Image by Wikimedia Foundation, CC BY-SA 4.0, via Wikimedia Commons.

December 2025 marked a landmark moment for the Wikimedia Foundation: After years of engagement within the United Nations (UN) system, the Foundation delivered remarks at the UN General Assembly Hall about the future of the internet.

As the most important organization in the world for proposing, discussing, and setting global policy, the UN is a critical space to display the community-led and governed Wikimedia model. There we can demonstrate why the model is an effective online approach to promote and protect an open, reliable, and inclusive internet—and how it delivers the promise of a positive vision of global internet governance.

We want the internet to remain global, free, and open so that the Wikimedia projects can continue serving billions in their public interest mission. To achieve this, especially when internet and technology regulation is rapidly expanding, we must persuade governments around the world about how critical it is to: support and protect those who make open knowledge possible, and preserve community-governed digital spaces in the age of artificial intelligence (AI) and other emerging technologies.

None of these things can be taken for granted. Changes to the world and technology affect how people are accessing and sharing information online and, in many countries, how—or even whether—they can do so safely. In an increasing number of places, both people and online spaces are under threat. The Wikimedia projects are no exception.

To accomplish our mission—a world in which every single human being can freely share in the sum of all knowledge—we must educate and collaborate with those who are deciding how we will react to these changes. Doing so is the only way to ensure a positive vision of the future of the internet.

In this blog post, we explain how we have worked to:

  • Secure our collective voice at the UN,
  • Contribute to defining the digital future,
  • Achieve recognition and positive impact, and
  • Lead conversations based on shared values.

Why the UN matters to the free and open knowledge movement

In the destructive wake of the Second World War, leaders of nations across the world united to create the first truly global organization in 1945. The United Nations would serve as a single institution where every country could eventually become a Member State to set global rules together. The goals were global security, strengthening relations among the world’s countries, and cooperating more effectively to establish and ensure standards for better social and economic conditions as well as fundamental human rights.

Today, the UN system serves its many functions by means of six bodies, multiple specialized agencies, and numerous programs and funds. Presently, 193 Member States work together daily to solve international problems through diplomacy, upholding international law, and delivering humanitarian aid. Even with the current challenges facing the UN, it continues to offer a platform for diplomacy and supports billions of people globally.

Since Wikimedia projects are part of a truly global movement, they are affected by national regulations and international frameworks alike. For the projects to continue offering the sum of all knowledge globally, those driving digital governance need to understand both how the projects work, and why the digital commons are essential to a healthy, equitable online information ecosystem. We engage across the UN system, Member States, and civil society to explain how open knowledge and the communities that sustain it ensure the quality of the world’s public information infrastructure.

Our world is marked by geopolitical tensions, the accelerated development of generative AI, growing restrictions on digital rights and freedoms, and the spread of misinformation. Driven in part by the concerns of civil society, UN Member States have worked hard to catch up, formulating and implementing stricter regulation at the national and international levels. These frameworks are based on different visions of what the future of the internet should be. The open web that we have today is often contrasted with a more tightly managed internet. This surge of laws can affect the Wikimedia movement in multiple areas: intermediary liability protections, content moderation, surveillance and privacy, copyright and intellectual property, and AI and data protection.

As the world changes, so does the environment in which free and open knowledge must continue to operate in order to survive. For all of these reasons, when the Foundation and Wikimedia affiliates work with and at the UN, we ask for two main things:

  • First, to protect the rights of the people who discover, report, research, curate, translate, analyze and read facts on Wikipedia.
  • Second, to support and protect community-led, public interest projects that create digital public goods when regulating the internet to address real harms. 

These are not abstract public policy questions: they are the North Stars that have guided our journey of engagement with the UN.

A photograph of the United Nations headquarters in New York City taken from Roosevelt Island
The United Nations headquarters in New York City viewed from Roosevelt Island. Image by Neptuul, CC BY-SA 3.0, via Wikimedia Commons.

Securing our collective voice

The Foundation engages with the UN because the policies, norms, and decisions shaped there today will define what internet governance—and therefore the Wikimedia projects—will look like tomorrow. In our engagement with other stakeholders, we strive to protect the conditions that will allow us to realize our vision that everyone, everywhere, can participate in free and open knowledge.

In October 2020, we entered a pivotal collaboration with the UN, working directly with the World Health Organization (WHO) to make accurate health information available in hundreds of languages during the COVID-19 crisis. This worldwide initiative exemplified the public interest nature of the Wikimedia projects and their commitment to enabling access to reliable information in moments where information integrity is critical.

In order to more effectively represent the efforts of Wikimedians and other open internet advocates with the UN, the Foundation became an accredited observer at the UN Economic and Social Council (ECOSOC) in 2022. This consultative status has allowed us to participate as a stakeholder in discussions that can have a critical impact on the Wikimedia projects.

These milestones demonstrated an understanding of the positive impact of the Wikimedia projects and volunteer communities and secured a formal voice that positions us to engage in shaping landmark digital governance frameworks in the years ahead.

Contributing to defining the digital future

In recent years, two significant UN-level processes have been shaping the future of the internet and how the Wikimedia projects can continue to exist online. One is the Global Digital Compact, a worldwide framework for digital cooperation and governance of digital technologies. The other is the review of the World Summit on the Information Society (WSIS) 20 years later, which we will discuss later in this post.

The Global Digital Compact

In 2024, UN Member States negotiated a Global Digital Compact: a comprehensive global framework for digital cooperation and the governance of digital technologies and artificial intelligence. The Compact was approved unanimously at the Summit of the Future in September of that year at the UN headquarters.

While the Compact was being discussed and drafted, its various drafts were open to commentary and feedback worldwide. From the very beginning of the discussion and until the vote on the final draft, the Foundation and Wikimedians engaged extensively with the process.

In our contribution to the UN’s civil society consultation, we advocated that the Compact highlight the importance of human rights online and a multistakeholder governance system—that is to say, a system where not only governments, but also nonstate actors such as civil society organizations like the Foundation, Wikimedia affiliates, and the technical communities have a say in matters of digital governance.

We began a campaign that included the Foundation and Wikimedia affiliates drafting an open letter together to advance three main requests. In the letter, published in April 2024, we called upon the drafters of the Compact to:

  • Protect and empower communities to govern online public interest projects;
  • Promote and protect digital public goods by supporting a robust digital commons from which everyone, everywhere can benefit; 
  • Build and deploy AI and machine learning to support and empower, not replace, people who create content and make decisions in the public interest.

Our Compact campaign culminated at the Summit of the Future in September 2024—a special gathering where the UN General Assembly approved the Compact. At the Summit, the Foundation cohosted a high-level event. We brought together Member States, UN representatives like the Secretary General’s Envoy on Technology, and civil society actors, who spanned from Jimmy Wales, founder of Wikipedia, to academics and private sector representatives. We presented our vision of the internet and what is needed to protect digital public goods like Wikipedia, which are grounded in a robust digital commons and are essential for an inclusive, open, sustainable digital world.

We were encouraged to see that our requests were shared by many others and partially reflected in the final, approved Compact. At the same time, our efforts on the Compact continued to build momentum for the recognition of two Wikimedia projects as digital public goods and laid the groundwork for continued multistakeholder advocacy.

A photograph of Wikimedia affiliates working on the Global Digital Compact at the Wikimedia Summit 2024
Wikimedia affiliates working on the Global Digital Compact at the Wikimedia Summit 2024. Image by Owala kpapko, CC BY 4.0, via Wikimedia Commons.

Achieving recognition and positive impact

Wikimedia projects serve as part of the world’s digital public infrastructure, informing billions of people across every region and over 300 languages. The projects operate using a particular model: led by our volunteer community, with respect for privacy, without for-profit incentives, and through open and free, reliable content that serves the public interest. Wikimedia volunteer contributors develop and enforce policies that ensure information integrity on the projects, providing valuable examples of how human agency, community-led governance, and citing reliable sources can create trust online.

Wikipedia and Wikidata recognized as digital public goods

The Wikimedia projects’ unique contributions to society worldwide, as well as their commitment to constructive values, led to two important recognitions in 2025: Wikipedia and Wikidata were acknowledged as digital public goods. The Digital Public Goods Alliance (DPGA), a multistakeholder UN-endorsed initiative, included the projects in its registry of open-source software, data, AI models, standards, and content. Digital public goods are acknowledged as being created to benefit people across the world. They must also fulfill other strict criteria, which include causing no harm, following best standards and practices, ensuring data protection and privacy, and contributing to the advancement of the Sustainable Development Goals (SDGs).

Demonstrating how the Wikimedia model maintains information integrity

Despite Wikimedia’s widespread brand recognition across the world, and the recognition of the contributions that the projects offer the general public, government officials rarely understand open knowledge and community-led models. As a result, internet laws and regulations risk ignoring or weakening these public interest platforms. It is crucial to explain to the UN and Member States how the web can be governed in the public interest: through transparency, community participation, and structures designed to protect human rights and information integrity rather than simply profit.

In 2025, the UN launched its Global Principles for Information Integrity. These principles seek to build fair, diverse, and inclusive online spaces, where people are empowered rather than exposed to disinformation, hate, or monopolized information flows. To highlight how the Wikimedia model and volunteer communities support these principles, we organized presentations and trainings for multiple UN agencies and departments. These exchanges helped create a better understanding of how Wikimedians collaborate to make sure that content on the projects is reliable. They also created more direct connections between UN offices and volunteers in multiple countries, providing opportunities for continued conversations and lessons learned about knowledge creation and sharing processes as well as alternative models of digital governance.

Various other activities demonstrated in action how the Wikimedia model works and maintains information integrity. In 2025 alone:

  • The Foundation and Wikimedia NYC organized an edit-a-thon at the United Nations headquarters during UN Open Source Week to engage with UN and Member States delegates, among other guests, and expand and update Wikipedia articles together.
  • UN officials from the United Nations Information Centre (UNIC), United Nations Peacekeeping, and the United Nations High Commissioner for Refugees (UNHCR) attended Wikimania in Nairobi, Kenya, to join numerous panels hosted by Wikimedians and discuss information integrity in the digital age with volunteer contributors during a dedicated panel.
  • During UNGA High-Level Week, the Foundation presented an international Wikimedian project to improve and maintain information integrity on extreme weather and climate-related events on Wikipedia and Wikidata. Recognizing the importance of this work, the Global Initiative on Information Integrity on Climate Change (established by the Government of Brazil, the UN Secretariat, and the United Nations Educational, Scientific and Cultural Organization [UNESCO]) awarded it a grant to continue and scale its contributions.

These opportunities to connect Wikimedians directly with UN officials and processes help demonstrate how the projects work, provide spaces to voice the needs and concerns that underlie the creation and curation of open knowledge, and create opportunities for the UN and Permanent Representations to work with Wikimedians on promoting and protecting reliable information worldwide.

The project’s results and awarded grant show that Wikimedians’ efforts are already recognized as critical to maintaining information integrity and achieving UN-coordinated objectives.

(from left to right) Amandeep Singh Gill (United Nations Under-Secretary General and Special Envoy for Digital and Emerging Technologies), Jayantha Jayasuriya (Permanent Representative of Sri Lanka to the United Nations), Maurizio Massari (Permanent Representative of Italy to the United Nations), and Rebecca MacKinnon (Vice President of Global Advocacy, Wikimedia Foundation) at the 2025 UN Open Source Week edit-a-thon. Image by SkaterbyAssociation, CC0, via Wikimedia Commons.

Speaking through the world’s largest policy microphone

Throughout 2025, the UN conducted a 20-year review of the World Summit on the Information Society (WSIS+20). The WSIS+20 review assessed the state of global progress in using information and communication technologies for development and expanding online access in lesser-developed countries. This process included a high-level event in Geneva and concluded with the UN General Assembly’s comprehensive review in December.

During the UNGA High-Level Week in September, the Foundation joined the UN Digital Cooperation Day, where we reflected on the first year of implementation of the Global Digital Compact and discussed the WSIS+20 process with Member States and others during a dedicated roundtable. The Foundation highlighted the need for the WSIS principles to focus on digital public goods, online community-led projects, and the public interest internet.

We partnered with a coalition of Wikimedia affiliates to engage with the WSIS review, seeking to promote the multistakeholder system, human rights online, and an internet that protects and promotes community-led spaces. Alongside Wikimedia affiliates and other allies, we signed a statement from Global Partners Digital that urged the UN and its Member States to ensure all stakeholders were meaningfully consulted during the last phase of the review.

The Foundation’s efforts culminated at the UN General Assembly’s comprehensive review in December, when we addressed the UN and Member States. Our remarks celebrated that the multistakeholder model brought forth by WSIS has been a successful and productive collaboration framework—which echoes Wikimedians’ belief that when people are free to self-govern, public spaces are more inclusive and open, enabling participation in the sum of all knowledge across the world. The multistakeholder model allows civil society groups like the Foundation and Wikimedia affiliates as well as the private sector and technical groups to have a say on how the internet works and contributes to the public good.

We and our allies share a positive vision of the future of the internet: one where representatives from civil society, industry, academia, and other NGOs are included in such crucial debates and processes, ensuring their results are inclusive and beneficial worldwide. There is no larger policy microphone than the one we used last December to share our hopes for the digital future we are already building today.

These 2025 achievements, from the DPGA recognitions to hands-on demonstrations to high-profile advocacy at the UN General Assembly, have solidified the Foundation’s position as a leader in ongoing global digital policy conversations.

Leading conversations based on shared values

Making sure that Wikimedian voices are heard so they can explain what makes our model successful during these processes is not just a nice-to-have: it is an essential part of protecting the Wikimedia projects and promoting a community-led model of online governance, particularly critical in the age of AI and other emerging technologies.

In these challenging times of heightened geopolitical tensions and an uncertain digital landscape, participating in global policy spaces is more important than ever. At the United Nations, the Wikimedia movement can demonstrate a concrete, established, successful model of a community-led digital space that prioritizes trustworthy information, promotes human rights online, and reflects our core values: openness, human rights, and collaboration. It is a chance to show what an internet built around people and public interest and value can look like.

Going forward, our work with the UN and its specialized agencies, Member States, and our allies in civil society and the private sector will continue and deepen. We will:

  • Strengthen Wikimedia’s presence in UN-led processes on digital governance, human rights online, and information integrity.
  • Share evidence and lived community experience to inform policies that affect how knowledge is created, accessed, and preserved.
  • Build alliances with governments, civil society, academia, and the technical community to protect collaborative, community-governed spaces.
  • Elevate the voices of volunteers and communities—those who make free knowledge possible—across international forums.
  • Advocate for a digital environment that safeguards open knowledge, supports public-interest infrastructure, and ensures equitable access for all.

The Wikimedia movement will continue to show up, contribute, and collaborate. Together with global partners, we can help shape an online world that prioritizes people, communities, and the universal right to seek, receive, and share information.

_____

Stay informed on digital policy, Wikipedia, and the future of the internet: Subscribe to our quarterly Global Advocacy newsletter! 📩 And if you want to know more about our UN engagement, please reach out to the Global Advocacy team—we value your feedback and interest.

Robert Obiri

The Volunteer Supporters Network is delighted to announce that we have confirmed Robert Obiri from Wikimedia Ghana User Group as the new node for the VSN Hub Pilot.

As part of the Hub pilot, the Volunteer Supporters Network committed to adding a new node to the VSN Management team. The VSN community will remember that we put a call out in December 2025, and we held interviews with shortlisted candidates in February 2026.

As with other members of the Management team, this work is funded through the grant given to us by the Hubs fund. The VSN does not have a Hub coordinator (or similar role), as we chose to explore a slightly different model in which the management team takes on VSN work as part of their existing Affiliate roles. We want to see whether this approach helps root the Hub’s work deeply in the immediate experience of those who support volunteers within the movement.

Strengthening connections

Here is what Robert had to say:

“My name is Robert Obiri, a Wikimedian and volunteer with the Wikimedia Ghana User Group since 2019. I am excited to be joining the Volunteer Supporters Network (VSN) Hub pilot as the Project Lead, an initiative that aims to strengthen connections between Wikimedians who support volunteers across the Wikimedia movement.”

“I look forward to contributing to the expansion of the network’s reach and bringing more diverse perspectives into conversations about supporting volunteers across the movement. Joining the VSN Hub pilot is an opportunity to connect with other volunteer supporters globally, share experiences from community work in Africa, and learn from peers across the movement. I look forward to collaborating with members of the Volunteer Supporters Network and contributing to building stronger, more resilient volunteer communities across the Wikimedia movement.”

We are very excited to be working with Robert, who will bring his experience as a Community Manager and Programme Manager to our work. He also has extensive experience in regional networking, and a good understanding of the wider Wikimedia movement. Along with Vic Sfriso and Sara Thomas, Robert will be working on the following items:

  • Helping to organise regular knowledge sharing or peer-to-peer learning meetings in line with the VSN model
  • Outreach activities to increase membership of the VSN
  • Contributing to the development and sourcing of new learning resources to be shared through the VSN and its partners
  • Contributing to the development and maintenance of VSN partnerships
  • Attending and taking part in the annual meeting of the VSN
  • Attending regular meetings, including VSN Management (1 per month), with the Hubs team (1 per month), and with the Advisory group (1 every 2 months)
  • Representing the VSN at any appropriate events
  • Contributing toward reporting and metrics

Thank you so much for joining us Robert!

Find out more about the Volunteer Supporters Network, our work and the Hub Pilot.

volunteer supporters network logo
TitiNicola, CC BY-SA 4.0, via Wikimedia Commons

The Wiki Science Competition in Argentina

In recent years, Argentina has participated in two editions of the Wiki Science Competition (WSC), the international scientific photography competition started by Wikimedia Estonia, which later expanded to other countries through national editions coordinated within the framework of a common international organization. The objective of the competition is to visually document science and make that material available under free licenses. In both editions, the organization of the competition at the national level was led by the WikiUNLP team, which coordinated the call for entries and related activities in our country, promoting the participation of Argentina’s scientific and photographic communities.

The 2023 and 2025 editions featured hundreds of images related to scientific work, ranging from field photographs and records of biodiversity to images obtained in the laboratory or through microscopy. All images from the competition were uploaded to Wikimedia Commons, where they are available for reuse in Wikipedia articles, educational projects, and outreach materials.

In total, more than 350 photographs were submitted for the 2023 edition, while the 2025 edition featured 508 images, reflecting the growing interest in this type of initiative within the local community.

The 2025 edition also benefited from the support of the Argentine Association for the Advancement of Science and the Argentine Astronomical Association. This collaboration is significant because it involves scientific associations interested in open access to data, documents, and content on the internet, and serves as another link to the scientific and academic community.

Photographing science

The Wiki Science Competition seeks to expand the ways in which scientific knowledge circulates online. In many cases, scientific and academic output is accompanied by valuable visual records that often remain confined to academic or institutional settings. By inviting researchers, students, research teams, and scientific photographers to share these images on Wikimedia Commons, the competition allows this material to become part of an open repository that supports Wikimedia projects, expands access to scientific output and knowledge, and facilitates its reuse for educational and/or scientific purposes. 

The images are organized into different categories that reflect the diversity of scientific practices:

  • People in science, featuring images of researchers and research teams in their work environments.
  • Microscopy images, featuring images obtained using optical or electron microscopes. 
  • Nature, which includes photographs of organisms in their natural habitat. 
  • Astronomy, dedicated to images of the sky, astronomical viewing and astronomical equipment.
  • Image sets, which present visual series related to the same scientific phenomenon or process. 
  • Non-photographic media, including video and audio files, and computer-generated imagery.
  • General category, which includes other images related to various scientific disciplines.

Supporting local participation

In the 2025 edition, WikiUNLP also organized an activity designed to support those who wanted to participate in the competition but were not very familiar with the Wikimedia ecosystem. To this end, a hands-on virtual workshop was held, focusing on:

  • How to upload images to the project,
  • How to write clear and helpful descriptions, and
  • What to consider in order to participate in the international competition.

Initiatives like these not only facilitate participation in the competition, but also help bring the scientific community closer to Wikimedia projects and open knowledge practices.

The winning images 

You can find the 49 winning images from the competition here.

You can find the 508 images that were entered in the competition here.

What the WSC leaves us with

The 2023 and 2025 editions demonstrate the potential of these types of initiatives to strengthen the link between science and Open Knowledge. The competition allows researchers to share images that are part of their daily lives and that often remain outside mainstream public circulation. When these images are uploaded to Wikimedia Commons, they enrich the ecosystem of Wikimedia projects and can be reused to illustrate Wikipedia articles and other educational materials.

At the same time, participating in the international competition opens up the possibility for images produced in local contexts to reach a global audience, thereby increasing the visibility of the scientific work being carried out in the region. For WikiUNLP, promoting these types of initiatives is also part of a broader objective: to strengthen the presence of local scientific output within the Wikimedia ecosystem and to encourage scientific and academic communities to actively participate in the creation of open knowledge.

Beginning of the Campaign: A Virtual Launch Uniting Nigeria’s Wikimedia Communities for the WikiForHumanRights 2025 Campaign

The WikiForHumanRights 2025 campaign in Nigeria began with a national virtual launch that brought together editors from different Wikimedia communities across the country. For the second year running, the campaign united seven Wikimedia communities, reflecting Nigeria’s regional and linguistic diversity, from Yoruba-speaking communities to Igbo, Hausa, and Igala-speaking communities across the South-West, South-East, South-South, North-West, North-East, and North Central regions. The launch set the tone for a collaborative national effort focused on the theme Our Rights, Our Future, Right Now. Organizers outlined a shared vision to document the Right to a Healthy Environment, emphasizing how environmental challenges in Nigeria are directly connected to health, livelihood, housing, and food security.

During the launch, Associate Professor at the University of Lagos, Dr. Rose Alani, who was a special guest, highlighted the urgent need for documentation and advocacy in the face of Nigeria’s environmental challenges. She noted that while the country faces widespread environmental degradation, efforts to document these issues remain limited.

According to her, proper documentation is a critical first step toward raising awareness and inspiring social change.

“In Nigeria, the environment has borne the high costs of extraction, neglect, and injustice. Issues such as oil spills, illegal mining, plastic waste, desertification, air pollution, and indiscriminate dumping of hazardous materials are not isolated incidents. They represent a pattern of environmental injustice that directly affects human rights,” she said.

Dr. Alani added that Wikipedia plays a powerful role in shaping how global audiences understand local struggles. By documenting Nigeria’s environmental challenges on a widely accessed platform like Wikipedia, communities are ensuring visibility, representation, and sustained advocacy for environmental justice and equitable access to human rights.

The President of the Wikipedia Nigeria User Group, Mr. Olushola Olaniyan, was also a special guest at the launch. He emphasized the role of the media and open knowledge platforms in driving change, stating that by supporting Wikipedia’s free learning ecosystem, more Nigerians can be empowered to lead environmentally conscious initiatives within their communities.

Euphemia Uwandu, a program officer at the Wikimedia Foundation, was present at the virtual event and outlined the program’s purpose, main goals, and objectives. Ruby D-Brown, the WikiForHumanRights 2025 regional coordinator for Africa, also spoke extensively about the program in Africa and how Africans have contributed immensely to its success since its inception.

Editors were equipped with clear campaign goals, timelines, and content priorities across Wikipedia, Wikidata, Wikimedia Commons, and Wikivoyage, where the climate section was added to articles. This year, translation was added, and many articles were translated into Hausa, Igbo, Igala, and Yoruba.

This collective approach reinforced the mission of the Wikimedia Foundation to support free, inclusive, and locally grounded knowledge, while aligning with the global WikiForHumanRights initiative supported by the Wikimedia Foundation.

Campaign Focus: From Global Theme to Local Realities

The 2025 theme moved the campaign beyond abstract discussions of human rights. In Nigeria, rights are inseparable from land, water, air, and the environment people depend on daily. The campaign’s primary objective was to bridge the gap between local environmental crises and global knowledge equity, ensuring that lived experiences in Nigerian communities are visible on Wikimedia platforms. Editors focused on seven regions, each facing distinct environmental and human rights challenges. Through structured training, editathons, and peer learning sessions, contributors strengthened their skills in research, sourcing, translation, and how to write and contribute to Wikipedia and its sister projects. This year, the translation drive was activated.

Community Stories: Local Voices, Shared Impact

Anambra State by Dr. Ngozi: Editors in Anambra documented the human rights implications of severe gully erosion, flooding, and land degradation. Contributions highlighted how environmental destruction threatens the right to housing, food security, and safe livelihoods, displacing families and reducing access to arable land.

The Anambra Network of Wikimedia User Group Nigeria had one physical event and three online sessions. The campaign was led by Ngozi Perpetua Osuchukwu, and the training sessions were facilitated by Ngozi Perpetua Osuchukwu, Goodluck Ajunwa, and Peace Chinwendu Anyanwu. The participants were trained, and they contributed to Wikipedia, Wikidata, and Wikivoyage. They also translated English articles into Igbo. The physical event took place at Book Foundation, Awka, with 21 participants in attendance. We partnered with the Justice Development and Peace Commission (JDPC), Onitsha, and Green Growth Africa. The participants built their capacities in digital literacy.


Wikimedia Gombe Network By Atiba: In Gombe, the focus was on desertification and drought, examining how climate pressures are shrinking farmland and increasing food insecurity. Contributors linked environmental stress to broader socioeconomic vulnerabilities affecting rural communities.

Led by Ismail Atiba with support from Umar Faruk, the campaign combined hands-on training and coordinated editing activities that supported both new and experienced editors. Participants worked on Wikipedia, Wikidata, and Wikimedia Commons, contributing content that reflects the lived realities of communities affected by flooding, erosion, environmental pollution, and resource pressure in Gombe and beyond.

While the campaign achieved meaningful content outputs, it also surfaced important lessons around coordination, communication, and role clarity. These reflections are now shaping how the community approaches planning, mentorship, and collaboration in future initiatives.


Osun State by Adetoro Praise: In Osun, editors explored the impact of illegal mining and land degradation, highlighting how unchecked extraction damages ecosystems and disrupts community life. Contributions emphasized the connection between environmental harm and weakened access to basic rights. The campaign was led by Praise Adetoro with support from Ajeigbe Rukayat. We had one physical event and three online sessions. The participants were trained, and they contributed to Wikipedia, Wikidata, and Wikimedia Commons. They also translated English articles into Yoruba.

The physical event took place at Obafemi Awolowo University with 21 participants in attendance. We partnered with the Student Union and the Institute of Ecology. 

The campaign equipped participants with the skills and knowledge to contribute content about their rights, and it provided a platform to advocate for the sustainability of their environments.

Over 30 members were impacted and challenged to stand up for their human rights and to advocate for them. They all looked forward to more campaigns like this one.

Obafemi Awolowo University also recognized the initiative, adding the project to its curricula and inviting professionals and researchers to the next edition.


Kaduna State By Ramatu A Haliru: Kaduna contributors focused on land disputes and resource scarcity, documenting how competition over land and water often escalates into broader human rights challenges, including displacement and livelihood loss.

This campaign was led by the WUGN Kaduna Network community. We organized activities to raise general awareness and understanding of human rights in the Kaduna community, and three papers were presented to support that understanding. We held one physical meet-up and three online sessions, which concentrated on leadership, mentorship, and hands-on training on Wikipedia, Wikidata, and Wikimedia Commons. Articles were translated into Hausa, Wikidata items about hospitals and the health environment were created, and related images were uploaded. Finally, the campaign improved the skills, knowledge, and understanding of our participants concerning human rights.


Igala Community by Henry Ojonugwa: The Igala Wikimedia community prioritized language inclusion, translating key environmental and human rights content into Igala. This ensured that local communities could access information in the language they use daily, strengthening participation and ownership of knowledge.

When the WikiForHumanRights campaign arrived in the Igala Community, it felt like more than an event; it felt like a shared mission. Editors, learners, and first-time contributors gathered with one clear goal: to make human rights knowledge accessible in the Igala language. The campaign began with one vibrant in-person session, where participants met face-to-face, asked questions, learned together, and discovered the power of contributing to Wikimedia projects. The room was filled with curiosity, teamwork, and a strong sense of purpose.

To keep the momentum going, the community held three engaging online sessions, making it possible for more Igala speakers to join from different locations. During these sessions, participants worked collectively to translate human rights-related articles from English Wikipedia to Igala Wikipedia.

Beyond Wikipedia, the campaign expanded its impact across other Wikimedia platforms. Contributors added structured data to Wikidata, uploaded photos and media related to human rights to Wikimedia Commons, and enriched Wikivoyage with relevant content that highlighted places and stories connected to human rights in the region.

By the end of the campaign, the Igala Community had not only created content but had also strengthened voices, built skills, and reaffirmed the belief that access to knowledge is itself a human right. The WikiForHumanRights campaign left behind more than edits; it left a growing community committed to sharing knowledge, preserving language, and standing up for human rights through open collaboration.


Imo Community by Emmanuel Obiajulu: In Imo State, the campaign documented environmental pollution from crude oil refining, deforestation, and poor waste management; highlighted the human rights impact of these issues, including pollution-related health problems and lack of access to clean air and water; and worked to empower affected communities through knowledge sharing and collaboration.

The Imo State Network organised and implemented a series of activities and training sessions during the 2025 WikiForHumanRights campaign. These events helped equip participants with the skills to contribute effectively to the campaign. We organised three major training sessions covering Wikipedia editing, Wikimedia Commons uploads, creating Wikidata items, and description translation.

At the end, our 23 editors achieved the following results: 1.65k total edits, 388 articles created, 1.31k articles edited, 545 Commons uploads, and 528 words added. Our approach to maximizing impact centered on a high-engagement strategy: continuous capacity building, a performance-based reward system, and constant technical support for participants. To maintain high morale and consistent contributions, our network introduced a reward system for the weekly highest contributors.

This approach addressed the common challenge of volunteer burnout as it acknowledged individual excellence. Each week, participants who had the most edits were publicly recognized and rewarded with incentives, such as data stipends. This created a healthy competition that helped increase the volume of contributions during the campaign.

At the end, I can confidently say that the campaign successfully elevated the visibility and quality of local content within the global Wikimedia ecosystem.


Rivers Community by Dr. Jeremiah Ugwulebo: Dr. Jeremiah Ugwulebo and Nubel Bariloe Benjamin championed the campaign in Rivers State, where the focus was on environmental injustice, highlighting how oil spills, gas flaring, toxic waste, and damaged waterways affect health, clean water access, and the livelihoods of fishing communities. Contributors linked these environmental hazards to broader human rights and socio-economic challenges across the Niger Delta.

The campaign offered online and hybrid training, hands-on editing, and strong mentorship for both new and experienced editors. Participants contributed across Wikipedia, Wikidata, and Wikimedia Commons, creating and improving content that reflects real human rights experiences in Rivers State and beyond.

The campaign produced meaningful contributions and built editor confidence. It also highlighted lessons around coordination, connectivity, and onboarding new contributors. These insights are now guiding how the community plans mentorship, collaboration, and future human rights–focused Wikimedia activities.

Reflection and Learning: The December 19th Campaign Review session

As the campaign concluded, organizers and volunteers from across Nigeria gathered virtually on December 19th for a final review session led by the project coordinator, Rukayat Ajeigbe. The session provided space to reflect on how effectively the campaign documented the Right to a Healthy Environment and to assess lessons learned. National coordinators Kemi Makinde and Agnes Abah opened the session by sharing their experiences, both good and bad, and what they could do better. The review also welcomed the contributions of Euphemia Uwandu, Program Officer for Campaign Programs at the Wikimedia Foundation, who also serves as coordinator and general overseer of the WikiForHumanRights campaigns. She set the tone with a brief introduction to the campaign, highlighting its aims, objectives, and goals, and encouraging participants to contribute. She also offered candid observations about the campaign, noting how glad she was that this year editors were invited to share their views so the campaign can learn and improve going forward. The community leads shared their experiences and learnings from leading the campaign, trainers and reviewers shared their reviews, and editors were encouraged to share their takeaways from the overall experience.


Achievements: By the Numbers

The 2025 campaign recorded strong growth in both participation and content output through the localized training and community-driven coordination.

Community engagement
Over 300 contributors, including new and returning editors, participated across the seven regional networks.

Content creation
More than 400 Wikipedia articles were created or improved, alongside over 500 Wikidata items created, documenting environmental and human rights topics.

Language diversity
To advance knowledge equity, content was translated into Igbo, Yoruba, Hausa, and Igala, making human rights information accessible in the languages people speak at home.

Visual documentation
High-quality media files were uploaded to Wikimedia Commons, visually documenting erosion sites, pollution, environmental damage, and local sustainability efforts.

The Journey So Far and What Next

WikiForHumanRights 2025 in Nigeria demonstrated the power of coordinated community action in documenting environmental injustice and human rights. By centering local voices, languages, and lived experiences, Nigerian Wikimedia communities contributed knowledge that resonates far beyond national borders.

Looking ahead, the journey continues with a focus on sustaining editor engagement, expanding language-based contributions, deepening partnerships with mission-aligned civil society organisations and academic institutions, and strengthening year-round documentation of human rights and environmental sustainability.

Through these efforts, Nigerian communities remain committed to ensuring that free knowledge reflects the realities of people’s lives today and for the future.


One year ago, the Wikimedia Foundation reported a significant increase in bot traffic to the Wikimedia projects, largely coming from crawlers who extract content to train generative AI systems. We shared about the impact of these crawlers, and introduced our action plan to ensure a fairer use of our resources. Let’s take a look at the progress we’ve made on protecting our infrastructure, what we’ve learned along the way, and next steps.

Recap: High demand, increased strain, less visibility

As generative AI increasingly draws from high-quality, human-created content, automated traffic has risen sharply on Wikimedia sites. While Wikimedia content is free, the infrastructure that serves it is not. Crawlers tend to access every part of the Wikimedia ecosystem – articles, media files, and developer platforms – risking overload of these systems and a degraded experience for our readers and contributors. At the same time, LLM-powered features such as search summaries or chatbots are making it less likely that users know the source of information or follow links, as recent studies have found. Across the web, publishers are seeing more bot traffic and fewer human users – a trend we’re also observing. This creates an imbalance: increased extraction of content, with fewer people contributing back to sustain it.

What does an open access model look like when so many don’t play by the rules? What do we need to change in order to enable – and enforce – sustainable use of our infrastructure? These and other questions have shaped our approach. Rather than asking, “How can we prevent reuse?” we’ve been thinking about ways to enable sustainable, responsible reuse.

Prioritizing access for humans and mission-oriented traffic

We ensure fair usage by prioritizing access for our readers, content and technical contributors, blocking abusive traffic, and requesting companies who want to access our data at scale to use our Wikimedia Enterprise services that are designed for high volume use cases, instead of scraping pages or overusing community resources. 

To achieve this, we have updated our robot policy to set expectations, improved our bot detection and defense tools, and are investing in our API infrastructure to enable central management, improved governance and developer experience for our preferred ways of access. 
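Expectations of this kind are traditionally communicated through robots.txt, and well-behaved crawlers consult those rules before fetching anything. As an illustration only (the rules below are made up, not Wikimedia’s actual robots.txt), Python’s standard library can evaluate such a policy:

```python
from urllib import robotparser

# Hypothetical robots.txt rules, invented for illustration:
rules = """\
User-agent: *
Disallow: /w/
Crawl-delay: 5
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# A well-behaved crawler checks each URL and honors the requested delay.
print(rp.can_fetch("ExampleBot/1.0", "https://example.org/w/index.php"))    # False
print(rp.can_fetch("ExampleBot/1.0", "https://example.org/wiki/Main_Page")) # True
print(rp.crawl_delay("ExampleBot/1.0"))                                     # 5
```

The problem described in this post is precisely that many modern crawlers skip this check entirely.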

Readers, contributors, responsible bots, and abusive bots all share the same access points to our websites and infrastructure. We have therefore orchestrated our work with maximum care to minimize impact on our reading and editing community, with the ultimate goal of not impeding any person from accessing our projects. 

As a result of this work, we’re currently blocking or throttling about 25% of all automated requests that are coming from crawlers that don’t adhere to our policies (up to billions of requests per day). As we continue to improve our detection mechanisms, we expect this number to increase. Earlier this month, we also began rolling out global rate limits for API traffic, with a second rollout phase planned for April 2026.
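Rate limits of this general kind are commonly implemented as a token bucket: each client’s bucket refills at a fixed rate, short bursts are allowed up to a cap, and requests beyond that are rejected. The sketch below is purely illustrative and is not the Foundation’s actual implementation:

```python
import time

class TokenBucket:
    """Illustrative per-client rate limiter (a sketch, not production code):
    refills at `rate` tokens per second, allows bursts up to `capacity`,
    and each request consumes one token."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, never above capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # A server would typically answer HTTP 429 with Retry-After.
```

A server keying such buckets by client identity (IP range, API token, or User-Agent) can throttle exactly the fast, unidentified traffic described here while leaving ordinary readers unaffected.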


Both crawling the site and using the APIs are still possible for anyone within the limits of the robot policy. Scraping at higher rates is generally restricted. Obtaining higher rate limits for the APIs, however, is easily possible and a preferred way of access. The rule of thumb is: The stronger the provided identification, the higher the provided limit. As we aim to minimize impact on our technical community, multiple options exist for technical contributors to identify their bots and tools and receive higher limits if needed. Bot owners who are unsure how to get the access they need can contact the Wikimedia Foundation.   
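In practice, identification starts with a descriptive User-Agent header that names the tool and gives the operator a way to reach its owner. A minimal sketch, where the tool name, version, and contact address are hypothetical placeholders rather than a real registered bot:

```python
# Sketch of polite client identification; all values below are invented.

def build_user_agent(tool: str, version: str, contact: str) -> str:
    """Compose a descriptive User-Agent so server operators can attribute
    traffic in their logs and contact the bot owner if it misbehaves."""
    return f"{tool}/{version} ({contact})"

headers = {"User-Agent": build_user_agent("ExampleWikiTool", "0.1", "ops@example.org")}
print(headers["User-Agent"])  # ExampleWikiTool/0.1 (ops@example.org)
```

A client sending headers like these is far easier to grant a higher rate limit than one spoofing a web browser.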

Good bot, bad bot, human? Differentiating legitimate users from abusive bots

A prerequisite to prioritizing access for humans and mission-oriented traffic and preventing abuse is the ability to differentiate legitimate users (bots and humans alike) from abusive bots. In the past, abusive bots were fewer in number, and easier to identify. And traditional web crawlers like search engine bots followed best practices: slowing down if the server started returning errors, and making efforts to be easily identified in server logs. They also brought visitors back to the sites, by indexing and showing pages in search results, so everyone benefited. In addition, the Wikimedia communities rely on their own bots and bespoke tools to support and speed up workflows from content creation to vandalism patrols. 
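The “slow down when the server returns errors” convention mentioned above is typically implemented as exponential backoff with jitter. A minimal sketch of that behavior:

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Seconds to wait before retry number `attempt` (0-based): the delay
    doubles each attempt, is capped, and is jittered so that many clients
    retrying at once don't all hit the server at the same moment."""
    delay = min(cap, base * (2 ** attempt))
    return delay * random.uniform(0.5, 1.0)
```

A crawler following this convention sleeps for `backoff_delay(n)` after its n-th consecutive error response before trying again, which is exactly what the new generation of bots fails to do.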

The new generation of abusive bots, however, routinely ignores historical precedent and behaves badly: sending requests as fast as they can, spoofing the identities of real web browsers, and circumventing rate limits. Thinking about bots as adversarial was a new experience for us, and forced us through many iterations of improving our bot detection.

Bots that cover their tracks: A predatory business model

Many modern bots operate outside of the established rules of the Internet, ignoring limits imposed by site owners, extracting data as fast as possible with no regard for the health of the host websites. In response, website operators have started to impose stricter rate limits on requests coming from datacenters and individual sources. But as a consequence of that, crawling operators have resorted to using a shady network of so-called “residential proxies” — companies that sell access to people’s own home or mobile connections, to hide their data extraction among legitimate browsing traffic. In this new world, there is little a website operator can do to stop the flood, as these networks can span hundreds of millions of IP addresses, without identifying human users one way or another. You might have noticed that, on a lot of websites, you’re now requested to “verify you’re a human” before being allowed access; these networks are the most likely cause of this shift in behaviour, and why community-centric knowledge sites like ours (and OpenStreetMap) try their best to do the same while respecting our users’ right to not be tracked extensively. 

Looking ahead: Responding to threats and exploring the opportunities of reuse

Over the coming months, we aim to further improve our detection systems to keep pace with rapidly changing bot behavior (such as residential proxies); continue to roll out and fine-tune API rate limits; and invest in our API infrastructure. This includes completing work on a dedicated Attribution API to make it easy for reusers to provide pathways for discovery. We have also started working on improving our media infrastructure, aiming to make the platform more resilient when extensive scraping occurs.

As we’re planning the next phase of this work, we’re also looking at opportunities beyond protecting our infrastructure: we want to explore ways to ensure that content reuse is sustainable long-term, including helping drive contributors back, and to further improve the discoverability and developer experience around the APIs, our preferred channels for access. 

While more work is needed to complete this initiative and protect against new forms of abuse, we have made great progress so far. This would not have been possible without the support of our amazing technical community – many thanks to everyone who has updated their code to follow updated best practices, given feedback, asked questions, helped fellow developers, or reported bugs!

We will continue to share about this work on mailing lists and blog posts – stay tuned for the next update!

Key challenges for a new Grantmaking Strategy

Thursday, 26 March 2026 19:42 UTC

The Global Resource Distribution Committee has identified key challenges faced by the Wikimedia Movement’s grantmaking system. This selection is based on feedback gathered from communities, affiliates, Regional Fund Committees, the Affiliations Committee, the Ecosystem of Wikimedia Organizations pilot, and the Wikimedia Foundation. The document reflects the main themes across these conversations.

With these challenges, we are laying the groundwork for the new Grantmaking Strategy. Which problems could this strategy address? We have started to think about strategic responses to these challenges, and we will share them as soon as we have a version ready for review.

This document identifies challenges to focus our work in the new strategy. It is possible that the new strategy will not address all these challenges. Some are more urgent than others, and some are simpler to resolve than others.

From this point, we are focusing our work on three strategic challenges. We have also identified two operational challenges, which we aim to address once the approach for the strategic challenges is clear.

Strategic challenges

Operational challenges

The GRDC welcomes ideas on how to address any of these challenges. Please share them on the respective discussion pages.

Wikimedia Tunisia workshop at the Metlaoui Public Library.

“The power of Wikipedia in shaping people’s awareness is so important because even the smallest bit of research can make a difference.” -Jamilah Thomas, senior, East Carolina University

Consider the number 14,809 — the number of articles under the scope of WikiProject Indigenous peoples of North America, which seeks to “encompass all current, historic, ethnic, legal, and cultural aspects of the many groups collectively described as Indigenous peoples of North America.” That represents only 0.21% of all articles on the English Wikipedia. Even the creation of a single article in an underrepresented topic area can set off ripples of impact, which is why, as one of Wiki Education’s Wikipedia Experts, I was beaming with curiosity and excitement when I saw the article that East Carolina University student editor Jamilah Thomas created as part of her Wikipedia assignment.

Aware of the lack of representation in Hollywood and on Wikipedia, Thomas was inspired to create the article on the early 20th century Native American film organization, War Paint Club.

But her journey to the War Paint Club was not so simple. A senior majoring in English with a minor in Film Studies, Thomas shared that she initially wanted to create the biography article for Native American actress White Bird. Like many editors who attempt to write about underrepresented topics on Wikipedia, Thomas ran into an unfortunate but all too familiar roadblock, one that I witness editors run into time and time again and that I have encountered myself when editing: not enough published, reliable secondary sources on the topic (in this case, White Bird).

Jamilah Thomas. Image courtesy Jamilah Thomas, all rights reserved.

“I found myself wanting to know more about White Bird as a person besides her contributions,” explained Thomas. “But I decided I could only focus on the War Paint Club due to the lack of sources found. I asked myself questions like who created the organization? Why was it formed?”

For Thomas, creating the War Paint Club article was key to shedding light on the history of Native Americans in Hollywood and the figures that supported this community. 

“I hope that readers understand how this club was important for Native American actors in early Hollywood,” said Thomas. “White Bird, a major founder, did so much for the community at the very beginning.”

Thomas understood the impact she made by filling in this content gap of Native American film history. I was impressed that even though she was unable to create the biography on White Bird, she quickly hatched a plan to plant the seed for others in the community to develop this little corner of the encyclopedia. 

When reflecting on her editing experience, Thomas spoke like a true Wikipedian. 

“The power of Wikipedia in shaping people’s awareness is so important because even the smallest bit of research can make a difference,” said Thomas. “Just by adding the War Paint Club to Wikipedia, I have now also introduced White Bird as well. Even if we have come so far from early Hollywood representation, we still have a long way to go and the War Paint Club proves that these discussions should still be had.”

Throughout the project, Thomas honed her skills in research, time management, and summarizing information in her own words. The student editor shared how this preparation aligned well with her future goals of becoming an archivist, especially when it came to the research portion of the project.

Like many of the student editors we work with, Thomas at first felt insecure and a bit worried about sharing her work on a public-facing platform. But her feelings quickly changed once she got over that hurdle and mustered the courage to press publish.

“This project ended up being more fun than I thought it would be,” said Thomas. “The feeling that I contributed to adding a topic that once didn’t exist on Wikipedia was wonderful!”

Even though the class assignment is over, Thomas is already thinking about diving into research mode once again to improve Wikipedia’s coverage on White Bird and the War Paint Club, and my Wikipedian heart couldn’t be happier to hear this!


Interested in incorporating a Wikipedia assignment into your course or know an instructor who may be interested? Visit teach.wikiedu.org to learn more about the free resources, digital tools, and staff support that Wiki Education offers to postsecondary instructors in the United States and Canada.

The Wikimedia Foundation, and by extension the Affiliations Committee, are extending the pause on new affiliate recognition that started in August 2025, from March 31, 2026, to September 1, 2026. This additional time will allow us to incorporate the extensive feedback we received into a new model of affiliation, with clear guidelines on the purpose, expectations, and support offered to all recognized affiliates. This extension doesn’t affect existing affiliates, who continue under their current agreements.

Since the pause was announced, we have made significant progress through community consultations and AffCom’s work in better understanding the current and future needs of movement organizations. The draft proposal for a Future Affiliate Landscape by the Ecosystem pilot focus group, informed by AffCom’s November 2025 recommendations, highlights the limitations of the current affiliate model and suggests several improvements that we want to incorporate. At the same time, the Global Resource Distribution Committee is working on a refreshed Grantmaking Strategy, and we want to better align the affiliate support guidelines with the funding support guidelines.

This extension gives us the time to complete this work, ensuring that affiliate recognition resumes with clear expectations and a framework to better support the Movement Organizations as they exist today. It also allows us to share a proposal by Wikimania, to gather feedback and improve it before resuming affiliate recognition in September.

Until September 1, 2026:

  • The Wikimedia Foundation will continue the pause in signing new affiliate agreements.
  • New affiliate applications will not be reviewed, but we encourage interested groups to reach out to AffCom and share their intentions.
  • Existing affiliates will continue under their current agreements without interruption.

We recognize that this extension may affect groups preparing to request recognition. We value the work you are already doing and encourage you to continue; your community activities remain just as important. Until the Affiliate Ecosystem is finalized in September and new affiliation applications criteria are established, please follow affiliate governance best practices and the Affiliate Health Criteria. As part of your preparation, we invite you to review the Future Affiliate Landscape proposal and share your feedback there.

Thank you. 

Chen Almog
Community Operations Manager, Wikimedia Foundation

This announcement has been also posted on the Wikimedia movement affiliates noticeboard.

Screenshot of Wikimedia user group logo template

Wikiesfera + GLAMs: Care and Joy

Thursday, 26 March 2026 09:00 UTC
Edit-a-thon on researchers at Spain’s National Archive, organized by Wikiesfera in 2024.
Photo by PatriHorrillo.

Last month, in the project Wiki and GLAM: Harnessing Knowledge to Foster Gender Equality, we talked about the partnership between the Belém Library and Wiki Editoras Lx in Portugal. Now we are moving east on the Iberian Peninsula to speak with Encina Villanueva. Encina is the GLAM officer at Wikiesfera, a user group officially recognized by the Affiliations Committee of the Wikimedia Foundation in 2018. A feminist community of editors based in Spain, Wikiesfera is dedicated to addressing knowledge gaps within Wikimedia by promoting the participation of underrepresented groups. It organizes initiatives and events around four lines of work: gender, culture, memory, and equity. Within the culture axis, they lead numerous activities in partnership with museums, archives, and other cultural institutions, such as the Teatro de la Zarzuela (Zarzuela Theater).

Around ten years ago, Encina left the NGO sector where she had been working for more than a decade. She got involved with Wikiesfera, which was still in its early days, and became fascinated by its work: the care they put into everything, from how content was chosen to the way editors treated each other. She found Wikiesfera gatherings joyful; she felt welcome and has stayed with the group ever since. Over the last three years, the group has received funding from the Wikimedia Foundation, which has allowed her to dedicate a few hours a week professionally to Wikimedia projects, in addition to her volunteer contributions.

Wikiesfera had already collaborated with archives and libraries when it started its museum partnerships. In the beginning these were small one-time events at the Museo del Traje and the Museo Nacional de Artes Decorativas in Madrid, as well as the Museo Sefardí in Toledo, public museums managed by Spain’s Ministry of Culture. From these experiences, they broadened their scope by reaching out directly to Spain’s Subdirección General de Museos Estatales (General Sub-Directorate for State Museums), which helped to develop more GLAM collaborations. As with WELx and the Belém Library in our last post, these partnerships are two-way: the GLAM institution contributes information and expertise, while the editors rely on their experience to decide what makes sense as Wikimedia content. Often, Wikiesfera organizes an edit-a-thon at the institution’s premises – events that are rewarding for both sides when partners are ready and well prepared. To locate potential partners, Encina suggests looking around and paying attention to gender-oriented projects led by GLAM institutions in the area. They may already have interesting material ready to be published.

She also suggests widening the focus. For example, when partnering with a museum we don’t need to talk only about artists: there are curators, researchers, educators, and other professionals with major roles in GLAMs. Consider Wikiesfera’s collaboration with the Ministry of Culture’s Archives, which was based on the work of women archivists rather than women represented in the archives. Remember that women predominate in the workforce of museums, libraries, archives, and other cultural institutions, representing 55.7% of professionals (2019 data). Museums, indeed, tend to be highly feminized workplaces – in 2021, 72.68% of the specialist workforce in Spain was made up of women.

Teatro de la Zarzuela, Madrid, 2008.
Photo by Andreas Praefcke.

Learning from Wikiesfera, it’s clear that we should think of a broader definition of GLAM, one that considers other cultural and heritage organizations. One example is the Zarzuela Theater, which contacted the group because they realized that many women who were relevant in the history of Zarzuela, the Spanish lyric-dramatic genre, did not have an article on Wikipedia, even though the theater’s archive contained a lot of information about them. Wikiesfera and the theater then worked together to launch the project Mujeres de Zarzuela (Women of Zarzuela) in 2025. This collaboration went even further, as the theater reached out to universities with music departments to invite and train students interested in creating, translating, and improving articles. They designed a four-session online course to teach about Wikimedia, as well as about Zarzuela and the women who have been part of this musical heritage. This project will continue in 2026, but this time the training will be offered on WikiLearn. This highlights another aspect that is important to Encina: offering something interesting and fun to volunteers. When organizing edit-a-thons, the group usually invites a guest speaker, offers snacks, and even tries to provide a small gift to participants, such as a book. As we know, socializing is an important aspect of these in-person events.

As other Wikimedians working with GLAMs have told us, Encina identified two major challenges in these projects. The first is ensuring long-lasting and sustainable partnerships, and the second is obtaining image clearance from museums. Wikiesfera has addressed the latter by collaborating with institutions such as the Biblioteca Nacional de España (the Spanish National Library, BNE), which has released images on Wikimedia Commons to illustrate biographies within the Mujeres de Zarzuela project.

Celebrating Wikiesfera’s 10th anniversary in 2024. Photo by Medialab Matadero.

There is one thing Encina mentioned that made us think: she pointed out that most participants who show up at the events they organize with GLAM institutions are women over 40. At least when it comes to museums, that makes sense. Recent research carried out by SENTOMUS, based on data from more than 150 museums, found that almost 68% of museum visitors were aged 46 and older, and that more than 60% of museum visitors are women or non-binary people. We know that more than 80% of Wikimedians are men (2020 data) – GLAM partnerships can be a great way to reach out to a more diverse pool of Wikimedia collaborators, at least in terms of gender.

Lessons learned:

  • Make it fun for participants! Think of ways to give back to volunteers: invite a guest speaker, offer snacks, try to provide a small gift.
  • Broaden your understanding of GLAM institutions: theaters, for example, may have archives.
  • Think about the wider institutional context and about who works behind the scenes: curators, set designers, archivists, and other professionals.  
  • Look around: see whether there are GLAM institutions already leading gender-related initiatives in your area.

A buggy history

Thursday, 26 March 2026 03:50 UTC
—I suppose you are an entomologist?—I said with a note of interrogation.
—Not quite so ambitious as that, sir. I should like to put my eyes on the individual entitled to that name! A society may call itself an Entomological Society, but the man who arrogates such a broad title as that to himself, in the present state of science, is a pretender, sir, a dilettante, an impostor! No man can be truly called an entomologist, sir; the subject is too vast for any single human intelligence to grasp.
The Poet at the Breakfast Table (1872) by Oliver Wendell Holmes, Sr. 
A collection of biographies
with surprising gaps (ex. A.D. Imms)
The history of Indian interest in insects has been approached by many writers, and there are several bits and pieces available in journals and various insights distributed across books. There are numerous ways of looking at how people viewed insects over time. One of these (cover picture on right) is a collection of biographies, some of which are uncited verbatim accounts from obituaries (and not even within quotation marks). This collation is by B.R. Subba Rao, who also provides a few historical threads to tie together the biographies. Keeping Indian expectations in view, both Subba Rao and the agricultural entomologist M.A. Husain play to the crowd in their early histories. Husain wrote in pre-Independence times, when there was a need for Indians to assert themselves before their colonial masters. They begin with mentions of insects in ancient Indian texts and, as can be expected, there are mentions of honey, shellac, bees, ants, and a few nuisance insects. Husain takes the fact that the term Satpada षट्पद or six-legs existed in the 1st century Amarakosa to claim that Indians were far ahead of their time, because Latreille's Hexapoda, the supposed analogy, was proposed only in 1825. Such one-upmanship (or quests for past superiority in the face of current backwardness?) misses the fact that science is not just about terms but also about structures, and one can only assume that these authors failed to find the development of such structures in the ancient texts that they examined. Cedric Dover, with his part-Indian and British ancestry, interestingly, also notes the Sanskrit literature but declares that he is not competent enough to examine the subject carefully. The identification of species in old texts also leaves one wondering about the accuracy of translations. For instance, K.N. Dave translates a verse from the Atharva-veda and suggests an early date for knowledge on shellac. Dave's work has been re-examined by an entomologist, Mahdihassan.
Another organism known in ancient texts as the indragopa (Indra's cowherd) supposedly appears after the rains. Some Sanskrit scholars have, remarkably enough, identified it, with a confidence that no coccidologist ever had, as the cochineal insect (the species Dactylopius coccus is South American!), while others identify it as a lac insect, a firefly(!), or as Trombidium (red velvet mites) - the last for matching the blood-red colour mentioned in a text attributed to Susrutha. To be fair, ambiguities in translation are not limited to those dealing with Indian writing. Dikairon (Δικαιρον), supposedly a highly valued and potent poison from India, was mentioned in the work Indika by Ctesias (398-397 BC). One writer said it was the droppings of a bird. Valentine Ball thought it was derived from a scarab beetle. Jeffrey Lockwood claimed that it came from rove beetles, Paederus sp. And finally, a Spanish scholar states that all this was a gross misunderstanding and that Dikairon was not a poison but - believe it or not - a masticated mix of betel leaves, arecanut, and lime!
 
One gets a far more reliable idea of ancient knowledge and traditions from practitioners: forest dwellers, the traditional honey-harvesting tribes, and similar people who have been gathering materials such as shellac and beeswax. Unfortunately, many of these traditions and their practitioners are threatened by modern laws, economics, and cultural prejudice. These practitioners are being driven out of the forests where they live, and their knowledge was hardly ever captured in writing. The writers of the ancient Sanskrit texts were probably associated with temple towns and other semi-urban clusters, and it seems the knowledge of forest dwellers was never considered merit-worthy by the book-writing class of that period.

A more meaningful overview of entomology may be gained by reading and synthesizing a large number of historical bits, and there are a growing number of such pieces. A 1973 book published by Annual Reviews Inc. should be of some interest. I have appended a selection of sources that are useful in piecing together a historic view of entomology in India. It helps, however, to have a broad skeleton on which to attach these bits and minutiae. Here there are truly verbose and terminology-filled systems developed by historians of science (for example, see ANT). I prefer an approach that is free of jargon overload or the need to cite French intellectuals. The growth of entomology can be examined along three lines: cataloguing, the collection of artefacts and the assignment of names; communication and vocabulary-building, the social actions involved in forming groups of interested people who work together, building common structure with the aid of records fixed in journals often managed beyond individual lifetimes by scholarly societies; and pattern-finding, a stage when hypotheses are made and predictions tested. I like to think that anyone learning entomology also goes through these activities, often in this sequence. Professionalization makes it easier for people to get to the later stages. This process is aided by having comprehensive texts, keys, identification guides and manuals, and systems of collections and curators. The skills involved in the production - ways to prepare specimens, observe, illustrate, or describe - are often not captured by the books themselves, and that is where institutions play (or ought to play) an important role.

Cataloguing

The cataloguing phase of knowledge gathering, especially of the (larger and more conspicuous) insect species of India, grew rapidly thanks to the craze for natural history cabinets among the wealthy (made socially meritorious by the idea that appreciating the works of the Creator was as good as attending church) in Britain and Europe, and their ability to tap into networks of collectors working within the colonial enterprise. The cataloguing phase can be divided into the non-scientific cabinet-of-curiosity style, especially followed before Darwin, and the more scientific forms. The idea that insects could be preserved by drying and kept for reference by pinning [see Barnard 2018], the system of binomial names, the idea of designating type specimens that could be inspected by anyone describing new species, and the system of priority in assigning names were some of the innovations and cultural rules created to aid cataloguing. These rules were enforced by scholarly societies and their members (which would later lead to such things as codes of nomenclature suggested by rule makers like Strickland, now dealt with by the committees that oversee the ICZN Code) and their journals. It would be wrong to assume that the cataloguing phase is purely historic and no longer needed. It is a phase that is constantly involved in the creation of new knowledge. Labels, catalogues, and referencing, whether in science or librarianship, are essential for all subsequent work to be discovered, and are essential to a science based on building on the work of others, climbing the shoulders of giants to see further. Cataloguing was probably what the physicists derided as "stamp-collecting".

Communication and vocabulary building

The other phase involves social activities: the creation of specialist language, groups, and "culture". The methods and tools adopted by specialists also help in producing associations and in identifying boundaries that could spawn new associations. The formation of groups of people based on interests is something that ethnographers and sociologists have examined in the context of science. Textbooks, taxonomic monographs, and major syntheses also help in building community - they make it possible for new entrants to rapidly join the earlier-formed groups of experts. Whereas some of the early learned societies were spawned by people with wealth and leisure, some of the later societies have had other economic forces in their support.

Like species, interest groups too specialize and split to cover more specific niches, such as the applied areas of agriculture, medicine, veterinary science, and forensics. There can also be interest in behaviour and evolution, which, though they have applications, often do not find economic support.

Pattern finding

The pattern-finding phase, when reached, allows a field to become professional - with paid services offered by practitioners. It is the phase in which science flexes its muscle, specialists gain social status, and practitioners are able to make livelihoods out of their interest. Lefroy (1904) cites economic entomology in India as beginning with E.C. Cotes [Cotes' career in entomology was cut short by his marriage to the famous Canadian journalist Sara Duncan in 1889, and he shifted to writing] at the Indian Museum in 1888. But he surprisingly does not mention any earlier attempts, and one finds that Edward Balfour, that encyclopaedic surgeon of Madras, collated a list of insect pests in 1887 and drew inspiration from Eleanor Ormerod, who hinted at the idea of getting government support, noting that it would cost very little given that she herself worked without remuneration to provide a service for agriculture in England. Her letters were also forwarded to the Secretary of State for India, and it is quite possible that Cotes' appointment was a direct result.

Eleanor Ormerod, an unexpected influence
in the rise of economic entomology in India

As can be imagined, economics, society, and the way science is supported - royal patronage, family, state, "free markets", crowd-sourcing, or mixes of these - impact the way an individual or a field progresses. Entomology was among the first fields of zoology that managed to gain economic value with the possibility of paid employment. David Lack, who later became an influential ornithologist, was wisely guided by his father to pursue entomology as it was the only field of zoology with jobs. Lack however found his apprenticeship (in Germany, 1929!) involving pinning specimens "extremely boring".

Indian reflections on the history of entomology

A rather interesting analysis of Indian science was made by K. Kunhikannan, the first native Indian entomologist to hold the official title of "entomologist" in the state of Mysore. Kunhikannan was deputed to pursue a Ph.D. at Stanford (for some unknown reason, two pre-Independence Indian entomologists trained at Stanford rather than in England - see postscript) through his superior Leslie Coleman. At Stanford, Kunhikannan gave a talk on science in India. He noted in that 1923 talk:
In the field of natural sciences the Hindus did not make any progress. The classifications of animals and plants are very crude. It seems to me possible that this singular lack of interest in this branch of knowledge was due to the love of animal life. It is difficult for Westerners to realise how deep it is among Indians. The observant traveller will come across people trailing sugar as they walk along streets so that ants may have a supply, and there are priests in certain sects who veil the face while reading sacred books that they may avoid drawing in with their breath and killing any small unwary insects. [Note: Salim Ali expressed a similar view]
Kunhikannan died at the rather young age of 47

 

He then examines science sponsored by state institutions, by universities and then by individuals. About the last he writes:
Though I deal with it last it is the first in importance. Under it has to be included all the work done by individuals who are not in Government employment or who being government servants devote their leisure hours to science. A number of missionaries come under this category. They have done considerable work mainly in the natural sciences. There are also medical men who devote their leisure hours to science. The discovery of the transmission of malaria was made not during the course of Government work. These men have not received much encouragement for research or reward for research, but they deserve the highest praise. European officials in other walks of life have made signal contributions to science. The fascinating volumes of E. H. Aitken and Douglas Dewar are the result of observations made in the field of natural history in the course of official duties. Men like these have formed themselves into an association, and a journal is published by the Bombay Natural History Association[sic], in which valuable observations are recorded from time to time. That publication has been running for over a quarter of a century, and its volumes are a mine of interesting information with regard to the natural history of India.
This then is a brief survey of the work done in India. As you will see it is very little, regard being had to the extent of the country and the size of her population. I have tried to explain why Indians' contribution is as yet so little, how education has been defective and how opportunities have been few. Men do not go after scientific research when reward is so little and facilities so few. But there are those who will say that science must be pursued for its own sake. That view is narrow and does not take into account the origin and course of scientific research. Men began to pursue science for the sake of material progress. The Arab alchemists started chemistry in the hope of discovering a method of making gold. So it has been all along and even now in the 20th century the cry is often heard that scientific research is pursued with too little regard for its immediate usefulness to man. The passion for science for its own sake has developed largely as a result of the enormous growth of each of the sciences beyond the grasp of individual minds so that a division between pure and applied science has become necessary. The charge therefore that Indians have failed to pursue science for its own sake is not justified. Science flourishes where the application of its results makes possible the advancement of the individual and the community as a whole. It requires a leisured class free from anxieties of obtaining livelihood or capable of appreciating the value of scientific work. Such a class does not exist in India. The leisured classes in India are not yet educated sufficiently to honour scientific men.
It is interesting that leisure is noted as important for scientific advance. Edward Balfour also commented that Indians were "too close to subsistence to reflect accurately on their environment!" (apparently in The Vydian and the Hakim, what do they know of medicine? (1875), which unfortunately is not available online).

Kunhikannan may be among the few Indian scientists who dabbled in cultural history and political theorizing. He wrote two rather interesting books, The West (1927) and A Civilization at Bay (1931, posthumously published), which defended Indian cultural norms while also suggesting areas for reform. While reading these works one has to remind oneself that he was working under Europeans and may not have been able to discuss such topics with many Indians. An anonymous writer who penned a prefatory memoir of his life in his posthumously published book notes that he was reserved and had only a small number of people to talk to outside of his professional work. Kunhikannan came from the Thiyya community, which initially preferred English rule to that of natives but changed their mind in later times. Kunhikannan's beliefs also appear to follow the same trend.

Image
Entomologists meeting at Pusa in 1919
Third row: C.C. Ghosh (assistant entomologist), Ram Saran ("field man"), Gupta, P.V. Isaac, Y. Ramachandra Rao, Afzal Husain, Ojha, A. Haq
Second row: M. Zaharuddin, C.S. Misra, D. Naoroji, Harchand Singh, G.R. Dutt (Gobind Ram Dutt - Personal Assistant to the Imperial Entomologist. Studied several solitary wasps.), E.S. David (Entomological Assistant, United Provinces), K. Kunhi Kannan, Ramrao S. Kasergode (Assistant Professor of Entomology, Poona), J.L.Khare (lecturer in entomology, Nagpur), T.N. Jhaveri (assistant entomologist, Bombay), V.G.Deshpande, R. Madhavan Pillai (Entomological Assistant, Travancore), Patel, Ahmad Mujtaba (head fieldman), P.C. Sen
First row: Capt. Froilano de Mello, W Robertson-Brown (agricultural officer, NWFP), S. Higginbotham, C.M. Inglis, C.F.C. Beeson, Dr Lewis Henry Gough (entomologist in Egypt), Bainbrigge Fletcher, Charles A. Bentley (malariologist, Bengal), Senior-White, T.V. Rama Krishna Ayyar, C.M. Hutchinson, E. A. Andrews, H.L.Dutt


Image
Entomologists meeting at Pusa in 1923
Fifth row (standing) Mukerjee, G.D.Ojha, Bashir, Torabaz Khan, D.P. Singh
Fourth row (standing) M.O.T. Iyengar (a malariologist), R.N. Singh, S. Sultan Ahmad, G.D. Misra, Sharma, Ahmad Mujtaba, Mohammad Shaffi
Third row (standing) Rao Sahib Y Rama Chandra Rao, D Naoroji, G.R.Dutt, Rai Bahadur C.S. Misra, SCJ Bennett (bacteriologist, Muktesar), P.V. Isaac, T.M. Timoney, Harchand Singh, S.K.Sen
Second row (seated) Mr M. Afzal Husain, Major RWG Hingston, Dr C F C Beeson, T. Bainbrigge Fletcher, P.B. Richards, J.T. Edwards, Major J.A. Sinton
First row (seated) Rai Sahib PN Das (veterinary department Orissa), B B Bose, Ram Saran, R.V. Pillai, M.B. Menon, V.R. Phadke (veterinary college, Bombay)
 

Note: As usual, these notes are spin-offs from researching and writing Wikipedia entries. It is remarkable that even some people in high offices, such as P.V. Isaac, the last Imperial Entomologist, grandfather of noted writer Arundhati Roy, are largely unknown (except as the near-fictional Pappachi in Roy's The God of Small Things).

Further reading
An index to entomologists who worked in India or described a significant number of species from India - with links to Wikipedia (where possible - the gap in coverage of entomologists in general is large)
(woefully incomplete - feel free to let me know of additional candidates)

Carl Linnaeus - Johan Christian Fabricius - Edward Donovan - John Gerard Koenig - John Obadiah Westwood - Frederick William Hope - George Alexander James Rothney - Thomas de Grey Walsingham - Henry John Elwes - Victor Motschulsky - Charles Swinhoe - John William Yerbury - Edward Yerbury Watson - Peter Cameron - Charles George Nurse - H.C. Tytler - Arthur Henry Eyre Mosse - W.H. Evans - Frederic Moore - John Henry Leech - Charles Augustus de Niceville - Thomas Nelson Annandale - R.C. Wroughton - T.R.D. Bell - Francis Buchanan-Hamilton - James Wood-Mason - Frederic Charles Fraser - R.W. Hingston - Auguste Forel - James Davidson - E.H. Aitken - O.C. Ollenbach - Frank Hannyngton - Martin Ephraim Mosley - Hamilton J. Druce - Thomas Vincent Campbell - Gilbert Edward James Nixon - Malcolm Cameron - G.F. Hampson - Martin Jacoby - W.F. Kirby - W.L. Distant - C.T. Bingham - G.J. Arrow - Claude Morley - Malcolm Burr - Samarendra Maulik - Guy Marshall
 
 - C. Brooke Worth - Kumar Krishna - M.O.T. Iyengar - K. Kunhikannan - Cedric Dover

PS: Thanks to Prof C.A. Viraktamath, I became aware of a new book - Gunathilagaraj, K.; Chitra, N.; Kuttalam, S.; Ramaraju, K. (2018). Dr. T.V. Ramakrishna Ayyar: The Entomologist. Coimbatore: Tamil Nadu Agricultural University. This suggests that TVRA went to Stanford at the suggestion of Kunhikannan.

Feb-2025: See dedication to Ormerod in Maxwell-Lefroy's Indian Insect Pests (1906).

2025: Found a book called The British Foundation of Indian Entomology (2023) by Michael Darby. It includes bits on Howlett, including his portrait, lifted straight out of Wikipedia - a portrait that took me several years to find while browsing an obscure Indian agriculture periodical!

    Introducing NeoWiki and More at MUDCon

    Thursday, 26 March 2026 00:00 UTC

    Discover NeoWiki, MediaWiki MCP, and Wikibase extensions. We recap our recent MUDCon talks.

    MUDCon (MediaWiki Users and Developers Conference) is a community conference that runs twice a year. We have been attending since the early days, when it was still called SMWCon. You can read our recap of the Fall 2024 edition in Vienna.

    Introducing NeoWiki (Spring 2026)

    At the most recent MUDCon, I introduced NeoWiki, a new open-source MediaWiki extension for structured data. I co-presented with Bernhard Krabina from Knowledge Management Associates.

    I have been maintaining Semantic MediaWiki for over 15 years and created large parts of Wikibase. Both are powerful, but they were designed in a different era. SMW is over 20 years old. Tools like Notion and Airtable have since set a usability standard that these older systems were never designed to match.

    NeoWiki applies the lessons we have learned. You get radically better usability and sustainability, with a focus on the features that deliver the most value rather than feature parity. That means native schemas with types and constraints, form-based editing, multiple subjects per page, JSON storage in MediaWiki revision slots, and a standard graph database for queries.

    NeoWiki is backed by ECHOLOT, an EU-funded cultural heritage consortium with 15 partners across 11 countries, and by Hallo Welt, the company behind BlueSpice. NeoWiki will power structured data in the next major version of BlueSpice. Professional Wiki is the product owner.

    You can try NeoWiki at neowiki.dev and find the source on GitHub. The slides are available online. We will embed the recording here once it is published.

    Follow NeoWiki on Mastodon, Bluesky, or X. Or visit the NeoWiki page to sign up for updates.

    MediaWiki MCP Server (Fall 2025)

    At MUDCon Fall 2025 in Hannover, I presented our MediaWiki MCP Server in a 20-minute talk. MCP (Model Context Protocol) is a standard that lets AI assistants interact with external tools and data sources. With our MCP Server, your AI assistant can read, create, edit, and delete wiki pages directly.

    In the demo I showed creating pages, building tables, and deleting pages. The MediaWiki MCP Server is open source and available on GitHub.
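    To make the idea concrete, here is a toy sketch of the tool-dispatch pattern that MCP servers are built around, using an in-memory dictionary as a stand-in wiki. The tool names (read_page, edit_page, delete_page) are hypothetical illustrations of the capabilities described above, not the server's actual API.

```python
# Minimal MCP-style tool dispatch over an in-memory "wiki".
# All names here are hypothetical stand-ins, not the real server's API.
pages = {}

def read_page(title: str) -> str:
    """Return the current text of a page, or '' if it does not exist."""
    return pages.get(title, "")

def edit_page(title: str, text: str) -> str:
    """Create or overwrite a page and confirm the save."""
    pages[title] = text
    return f"saved {title}"

def delete_page(title: str) -> str:
    """Remove a page if present and confirm the deletion."""
    pages.pop(title, None)
    return f"deleted {title}"

# The server advertises a table of named tools to the AI assistant.
TOOLS = {"read_page": read_page, "edit_page": edit_page, "delete_page": delete_page}

def call_tool(name: str, **args) -> str:
    # The assistant sends a tool name plus JSON arguments;
    # the server looks the tool up, runs it, and returns the result.
    return TOOLS[name](**args)

call_tool("edit_page", title="Sandbox", text="Hello, wiki!")
print(call_tool("read_page", title="Sandbox"))  # Hello, wiki!
```

    In the real server the same dispatch happens over the Model Context Protocol's JSON-RPC transport, and the tool bodies call the MediaWiki API instead of a dictionary.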

    Wikibase Extensions (Fall 2025)

    Also at MUDCon Fall 2025, I gave a 20-minute overview of the Wikibase extension ecosystem. If you use Wikibase, these 17 extensions can give you advanced search, data validation, local media support, date handling, and automated property values.

    We have a detailed write-up in our blog post Enhance Your Wikibase With Extensions.

    Gender, grave goods, and Anglo-Saxon Sussex

    Thursday, 26 March 2026 00:00 UTC

    On early Anglo-Saxon burials in Sussex where the grave goods don't match the osteological sex, and why the standard archaeological response of calling it an anomaly isn't good enough. Oh and I'm writing a paper.

    Wikis read-only: Datacenter Switchover

    Wednesday, 25 March 2026 15:09 UTC

    Mar 25, 15:09 UTC
    Completed - The scheduled maintenance has been completed.

    Mar 25, 15:00 UTC
    In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.

    Mar 25, 13:41 UTC
    Scheduled - The SRE team will run a planned data center switchover, moving all wikis from our data center in Texas to the data center in Virginia. This is an important periodic test of our tools and procedures, to ensure the wikis will continue to be available even in the event of major technical issues.

    The switchover process requires a brief read-only period for all Foundation-hosted wikis, which will start at 15:00 UTC on Wednesday March 25th, and will last for a few minutes while we execute the migration as efficiently as possible. All our public and private wikis will be continuously available for reading as usual, but no one will be able to save edits during the process.

    Episode 204: Noam Cohen

    Tuesday, 24 March 2026 18:06 UTC

    🕑 1 hour 15 minutes

    Noam Cohen is a journalist and writer who has written extensively about Wikipedia since 2007 for publications including The New York Times and Wired. He is the author of the 2017 book The Know-It-Alls: The Rise of Silicon Valley as a Political Powerhouse and Social Wrecking Ball, which was just released in paperback form with a new introduction by the author.

    You can also see this interview in video form, on YouTube.


    Wikipedia:Administrators' newsletter/2026/4

    Tuesday, 24 March 2026 13:44 UTC

    News and updates for administrators from the past month (March 2026).

    Administrator changes

    added ·
    readded
    removed

    Guideline and policy news

    Technical news

    Arbitration

    Miscellaneous



    ICIP & IDSov Project Update

    Tuesday, 24 March 2026 12:00 UTC
    Progress on Guide development and Expert consultation, by Belinda Spry.

    The Indigenous Cultural and Intellectual Property (ICIP) and Indigenous Data Sovereignty (IDSov) project led by Terri Janke and Company (TJC) is progressing well with the establishment of an Indigenous Expert Working Group (Working Group) to support this important work.

    Drawing on expertise across ICIP, Indigenous research, cultural governance, Indigenous Data Sovereignty (IDSov), archives and media, the Working Group provides independent cultural, strategic, and technical advice to the TJC team as they develop a draft ICIP and IDSov Guide for Wikimedia Australia. The group meets regularly and brings experience working with Indigenous knowledge, data, and content across sectors including archives, education, policy, and journalism.

    Wikimedia Australia is pleased to confirm the following members of the Expert Working Group:

    • Leonard Hill - Chief Executive Officer of Australian Institute of Aboriginal and Torres Strait Islander Studies (AIATSIS), bringing national leadership in Indigenous research, collections management, and cultural governance.
    • Dr Jessica Russ-Smith - Academic at Australian Catholic University with expertise in Indigenous research methodologies, governance, and policy.
    • Dr Kirsten Thorpe - Researcher at University of Technology Sydney specialising in Indigenous archives, digital collections, and community-led data governance.
    • Dr Tamika Worrell - Academic at Macquarie University whose work focuses on Indigenous data sovereignty, governance frameworks, and ethical research practice.
    • Yanti Ropeyarn - Master of Research in Arts Student at Macquarie University, Representative of Aboriginal and Torres Strait Islander Data Archive (ATSIDA) at University of Technology Sydney and member of the Executive Committee of Open Access Australasia, contributing expertise in Indigenous data stewardship and open access policy.
    • Miriam Corowa - Journalist with Australian Broadcasting Corporation (ABC) and SBS, bringing experience in Indigenous media representation and public communication.

    Some members of the Working Group will attend WikiCon Australia 2026 in Canberra, providing an opportunity for Wikimedia Australia’s editing community to meet with them and participate in discussions about the project, the draft Guide and broader implications.

    The WikiCon Canberra program highlights include:

    • Friday 10 April: Pre-conference First Nations panel at AIATSIS, open to both Wikimedians and the local GLAM community.
    • Saturday 11 April: Dr Janke’s keynote presentation on the draft ICIP & IDSov Guide and its key themes.
    • Sunday: 1.5-hour workshop with Dr Janke and her team to explore practical examples and gather community feedback.

    We invite Wikimedians to take part in a special workshop with Dr Janke and her team on Sunday, where you will have a rare opportunity to work through case studies that will directly shape the draft Guidelines and contribute to this important work.

    As an attendee of WikiCon, you will also gain an exclusive preview of the working document for the Guidelines and hear about its development from Dr Janke, her team and the members of the Working Group. The Guidelines will be circulated to all registered WikiCon participants in April, and a full public draft alongside a White Paper will be released from July 2026.

    25 Years of Wikipedia

    Monday, 23 March 2026 10:42 UTC

    On 18 March, Wikimedia Europe and Wikimedia Belgium brought together a room full of people who care deeply about one of the internet’s quirkiest and perhaps most surprising achievements: 25 years of Wikipedia.

    Think about what that actually means. A non-commercial, volunteer-built encyclopedia in hundreds of languages, freely available to anyone with an internet connection. No paywalls. No shareholders. No algorithm deciding what you should read next. Around 260,000 volunteers curate 65 million entries, viewed more than 15 billion times every month. We don’t have nearly enough of that kind of thing.

    A room full of communities

    Image

    The evening had the feel of a reunion — familiar faces from across the European digital policy and Wikimedia communities. A mini-exhibition taking guests through the history of Wikipedia. A Wikicheese station where volunteers photographed Belgian cheeses to improve culinary content on the internet. These things only happen when people have been genuinely building something together long enough that the work creates its own history.

    Image

    Annie Rauwerda

    Image

    Stand-up Wikipedian and Depths of Wikipedia creator Annie Rauwerda took the stage and did what she does best — making the audience laugh while quietly reminding everyone why Wikipedia and free knowledge are beautiful. Her show was a love letter to trivia, knowledge, facts and humans. You may watch an older version of her talk online.

    Here’s to the next 25.

    One emphasis was the conscious decision to celebrate being human. Wikipedia is a project of humans and dedicated to human knowledge, with everything that this entails – good and bad. In a world full of machine-generated content, the Wikimedia movement wants to confirm its commitment to being a human community of volunteers first and foremost.

    And since we are human, you will bear with us as we plug the Wikipedia Test here. It is a policy tool that asks a simple question: does a proposed law harm Wikipedia? When a law harms Wikipedia, it likely harms other community-led, nonprofit digital spaces too – spaces that don’t sell ads, don’t harvest data, and exist purely in the public interest. If you’re a policymaker, it’s worth thinking about and using.

    weeklyOSM 817

    Sunday, 22 March 2026 11:41 UTC

    12/03/2026-18/03/2026

    lead picture

    [1] OpenArdenneMap is an open-source map style designed for the production of topographic maps for printing | © juminet | map data © OpenStreetMap Contributors.

     

    Mapping

    • After passing through the proposal process and being approved, the ETCS Markers Tagging Scheme, an effort to unify the tagging of the markers used by the European Rail Traffic Management System, is available for everyone to use. Previously these were implemented using country-dependent schemes. The proponents are asking the mappers of countries where such systems are used to update the relevant wiki pages to include a redirect or a section unified with the new tagging scheme.

    Community

    • In their latest OpenStreetMap interview series, OpenCage spoke with Martin Ždila of Freemap Slovakia, the Slovak local chapter of the OpenStreetMap Foundation.
    • The UN Maps team introduced its new community ambassadors, who plan different activities to bring OSM to local communities.
    • KelsonV commented that the latest Pedestrian Working Group’s crosswalk corner tagging scheme is better than the way he had been doing it, so going forward he will use that scheme instead.
    • Mateusz Konieczny has requested feedback for proposed preset changes in the iD tagging schema and shared a list of several currently being reviewed for potential inclusion.
    • IXVG47QZ reported that last year Javi and Rebecca planned to bike-pack from the Austrian Alps into Asia using CoMaps, an OpenStreetMap-based mobile navigation app. ‘It has safely taken me to many countries in Asia and now in Oceania’, said Javi.

    Local chapter news

    • Habi has vibe-coded a script that monitors the official Swiss municipal boundary data from swisstopo and compares it with the boundary data in OpenStreetMap. The script runs daily at 2 am UTC via GitHub Actions, and is accessible here.
    • Lyft, an American ride-sharing platform, has joined as the latest OSM US organisational member.
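    As an aside, the core of such a boundary comparison can be sketched in a few lines. This is a hypothetical simplification: the real script fetches swisstopo geometries and OSM boundary relations, while here two small coordinate lists stand in for them.

```python
# Toy sketch of a boundary comparison (hypothetical simplification;
# a real monitor would download swisstopo and OSM geometries first).
def missing_vertices(official, osm, tol=1e-6):
    """Return official boundary vertices with no OSM vertex within
    `tol` degrees (a crude nearest-match check on (lat, lon) pairs)."""
    def close(p, q):
        return abs(p[0] - q[0]) <= tol and abs(p[1] - q[1]) <= tol
    return [p for p in official if not any(close(p, q) for q in osm)]

official = [(46.9480, 7.4474), (46.9490, 7.4480), (46.9500, 7.4500)]
osm      = [(46.9480, 7.4474), (46.9490, 7.4480)]
print(missing_vertices(official, osm))  # [(46.95, 7.45)]
```

    A production version would compare full polygon geometries (e.g. with a geometry library) rather than raw vertex lists, but the diff-and-report loop is the same shape.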

    OSM in action

    • Vasily Ivanov is developing a mobile-friendly bike route web map for the Ertlav cycling club. You can view other members’ routes and upload your own tracks and photos. OpenStreetMap is used as the base map and the map itself runs on MapLibre.
    • rbb24 used an OpenStreetMap-based map to visualise the locations of cycle paths that will be closed for renovations in the Oberspreewald-Lausitz district, Brandenburg, Germany.

    Open Data

    • The UNIPLU-BR dataset is the first unified and standardised national database of point precipitation (non-interpolated) in Brazil, consolidating raw data for 40 years and from five official monitoring networks: CEMADEN, INMET, ANA (Hidroweb), Telemetria, and ICEA. The dataset is available on Zenodo.

    Software

    • The March Organic Maps update report includes release notes about improvements related to conditional speed limits, more detailed contours for China, split/smaller Tanzania regions, leather shops, and more. According to the developers this update took more time due to hotfixes and Google Play review.
    • The project Geowiki provides a modular ecosystem for processing and visualising OpenStreetMap data, originally developed for OpenStreetBrowser. Its JavaScript library, geowiki-api, retrieves data via the Overpass API or OSM files, makes it usable in Leaflet, or exports it as GeoJSON, and can also act as an Overpass proxy server.
    • vizsim has developed Missing Mapillary GraphHopper Routing for Germany, a web application that plans routes along roads without Mapillary imagery. The tool combines OpenStreetMap data with Mapillary coverage, highlights missing segments through colour-coded routes, and uses GraphHopper for routing.
    • Eugene published a report about the results of the OsmAnd 2026 user surveys that were conducted recently.
    • Zkir announced that UrbanEye3D version 2.0, a JOSM plugin for visualising OpenStreetMap’s 3D data, will be released at the end of March 2026.

    Releases

    • [1] Juminet, who has been developing their topographic style for over nine years, has announced the release of the OpenArdenneMap winter 2025–2026 version. OpenArdenneMap is an open-source map style designed for the production of topographic maps for printing, available for use with QGIS and the Mapnik/CartoCSS libraries.

    OSM in the media

    • Jules Grandin, of Les Échos, explained the history of roundabouts in France and tried to answer the question of how many roundabouts there are in France using OpenStreetMap data.
    • Ishaan Kocchar wrote, on Substack, about the triple axes of the ‘Digital Communities Trilemma’: openness, activity, and quality, in the context of OpenStreetMap and open data. Ishaan argued that the ‘big corporate consumers’ of the contributed data do not always provide any benefit to the OSM community or the project itself. They compared the Indian context of collaborative mapping with OSM with the local commercial market.

    Other “geo” things

    • Coordinate Mapper is a professional-grade geospatial tool for plotting, analysing, and exporting coordinate data in multiple systems, including WGS84 and the UK National Grid.
    • PGlite, an open-source project that allows you to run PostgreSQL locally in a browser, has added long-awaited support for the PostGIS extension. You can try it out in the browser or use it as an npm package.
    • The Instituto Geográfico Nacional (Spain) is offering some courses on GIS and geoprocessing on its e-learning platform and using the OGC platform over 2026. The course about data management is open.

    Upcoming Events

    • OSMF Engineering Working Group meeting, 2026-03-20
    • Olomouc, Přírodovědecká fakulta Univerzity Palackého: Missing Maps Day Olomouc 2026, 2026-03-21
    • Perímetro Urbano Yopal, OSM video: Encuentro virtual: Introducción a OpenStreetMap, 2026-03-21
    • Tiranë, https://osmvideo.cloud68.co/user/ird-zqk-9vq-szt: OpenStreetMap Virtual Meetup Tirana, 2026-03-21
    • Domplatz Fulda: Frühlingsmapping 2026, 2026-03-22
    • Missing Maps: Mapathon en ligne – CartONG [fr], 2026-03-23
    • Bruxelles – Brussel, ULB Solbosch Campus – Building U – UB4.126: Belgian Interuniversity Mapathon 2026, 2026-03-23
    • Stadtgebiet Bremen, online und im Hackerspace Bremen: Bremer Mappertreffen, 2026-03-23
    • Pôle Numérique Brest Iroise: Rencontre OpenStreetMap et Territoires, 2026-03-24
    • Göttingen, Uni Göttingen: FOSSGIS-Konferenz 2026, 2026-03-24 – 2026-03-27
    • Derby, The Brunswick, Railway Terrace: East Midlands pub meet-up, 2026-03-24
    • UN Mappers Mappy Hour: UN Maps Community Ambassador Pilot Initiative, 2026-03-25
    • Düsseldorf, online at https://meet.jit.si/OSM-DUS-2026: Düsseldorfer OpenStreetMap-Treffen (online), 2026-03-27
    • Göttingen, Uni Göttingen, Fakultät für Geowissenschaften: FOSSGIS 2026 – OSM-Samstag, 2026-03-28
    • Chemnitz, Neues Hörsaalgebäude, TU Chemnitz: Chemnitzer Linux-Tage 2026, 2026-03-28 – 2026-03-29
    • Local Chapters & Communities Congress 2026, 2026-03-28
    • Vélo Utile rencontre OSM, 2026-03-28
    • Mira-Bhayander, DBT Café, Mira Road: OSM Mumbai Mapping Party No.8 (Western Line – North), 2026-03-28
    • Hannover, Kuriosum: OSM-Stammtisch Hannover, 2026-03-30
    • Saint-Étienne, Zoomacom: Rencontre Saint-Étienne et sud Loire, 2026-03-30
    • Stuttgart: Stuttgarter OpenStreetMap-Treffen, 2026-04-01
    • Le Schmilblick, Montrouge: Réunion des contributeurs de Montrouge et du Sud de Paris, 2026-04-02
    • नई दिल्ली, Jitsi Meet (online): OSM India – Monthly Online Mapathon, 2026-04-05

    Note:
    If you would like to see your event here, please put it into the OSM calendar. Only data which is there will appear in weeklyOSM.

    This weeklyOSM was produced by MatthiasMatthias, Raquel IVIDES DATA, Strubbl, Andrew Davidson, TrickyFoxy, barefootstache, derFred, izen57, mcliquid.
    We welcome link suggestions for the next issue via this form and look forward to your contributions.

    Rendering complex scripts in terminal and OSC 66

    Saturday, 21 March 2026 23:30 UTC

    As a programmer, I spend most of my time in a terminal application like Kitty. I use Neovim as my code editor. I use CLI-based AI agents. But the biggest pain, even in 2026, is that there is no terminal that can render complex scripts like Indic languages or Arabic. This is a significant limitation for me, as most of my work involves language processing.

    In this article, I will give a brief overview of why this issue remains unsolved—covering the character-cell grid model, width measurement, and the distinction between text shaping and rendering—along with ongoing efforts and a small tool I built recently that illustrates a solution path.
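    A small example shows why width measurement alone is already hard. Counting codepoints, and even the common refinement of skipping combining marks, both disagree with what a shaping engine actually draws for an Indic syllable. The helper below is a naive illustration of the problem, not any terminal's real algorithm.

```python
import unicodedata

def naive_cells(text: str) -> int:
    """Estimate display cells the way a mark-aware terminal might:
    one cell per codepoint, zero for marks (Unicode category M*)."""
    return sum(0 if unicodedata.category(ch).startswith("M") else 1
               for ch in text)

s = "नमस्ते"  # Hindi "namaste": six codepoints
print(len(s))          # 6: raw codepoint count
print(naive_cells(s))  # 4: marks skipped -- yet a shaping engine joins
# स + virama + त into a single conjunct, so the rendered width is
# narrower still. No per-codepoint formula can predict shaped output.
```

    This gap between codepoint arithmetic and shaped glyph runs is exactly why the character-cell grid model breaks down for Indic scripts and Arabic.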

    On a germ trail

    Saturday, 21 March 2026 09:52 UTC

    Hidden away in the little Himalayan town of Mukteshwar is a fascinating bit of science history. Cattle and livestock mattered a great deal in the pre-engine past, for transport and power, on farms and in cities, and especially for people in power. Hyder Ali and Tipu were famed and feared for their ability to move their guns rapidly, most famously making use of bullocks of the Amrut Mahal and Hallikar breeds. The subsequent British conquerors saw the value and maintained large numbers of them, at the Commissariat farm in Hunsur for instance.

    Image
    The Commissariat Farm, Hunsur
    Photo by Wiele & Klein, from: The Queen's Empire. A pictorial and descriptive record. Volume 2.
    Cassell and Co. London (1899). [p. 261]
    The original photo caption given below, while racy, was most definitely inaccurate;
    these were not maintained for beef:

    BEEF   FOR   THE   BRITISH   ARMY.
    It is said that the Turkish soldier will live and fight upon a handful of dates and a cup of water, the Greek upon a few olives and a pound of bread—an excellent thing for the commissariats of the two armies concerned, no doubt! But though Turk and Greek will be satisfied with this Spartan fare, the British soldier will not—not if he can help it, that is to say. Sometimes he cannot help it, and then it is only just to him to admit that he bears himself at a pinch as a soldier should, and is satisfied with what he can get. But what the British soldier wants is beef, and plenty of it : and he is a wise and provident commander who will contrive that his men shall get what they want. Here we see that the Indian Government has realised this truth. The picture represents the great Commissariat Farm at Hunsur in Mysore, where the shapely long-horned bullocks are kept for the use of the army.
    Image
    Report of the cattle plague commission
    led by J.H.B. Hallen (1871)

    Imagine the situation when cattle die off in their millions - the estimated deaths of cows and buffaloes in 1870 was 1 million. Around 1871, it rang alarm bells high enough to have a committee examining the situation. Britain had had a major "cattle plague" outbreak in 1865 and so the matter was not unknown to the public. The generic term for the mass deaths was "murrain", a rather old-fashioned word that refers to an epidemic disease in sheep and cattle, derived from the French word morine, or "pestilence," with roots in Latin mori, "to die." A commission headed by Staff Veterinary Surgeon J.H.B. Hallen went across what would best be called the "cow belt" of India and noted among other things that the cattle in the hills were doing better and that rivers helped isolate the disease. Remarkably, there were two little-known Indian members - Mirza Mahomed Ali Jan (a deputy collector) and Hem Chunder Kerr (a magistrate and collector). The report includes 6 maps with spots where the outbreaks occurred in each year from 1860 to 1866, and the spatial approach to epidemiology is dominant. This is perhaps unsurprising given that the work of John Snow would have been fresh in medical minds. One point in the report that caught my eye was "Increasing civilization, which means in India clearing of jungle, making of roads, extended agriculture, more communication with other parts, buying and selling, &c, provides greater facilities for the spread of contagious diseases of stock." The committee identified the largest number of deaths to be caused by rinderpest. Rinderpest has a very long history and its attacks in Europe are quite well documented. There had been two veterinary congresses in Europe that looked at rinderpest. One of the early researchers was John Burdon Sanderson (a maternal grand-uncle of J.B.S. Haldane) who noted that the blood of infected cattle was capable of infecting others even before the source individual showed any symptoms of the disease.
He also examined the relationship to smallpox and cowpox through cross-vaccination and examination for resistance. C.A. Spinage, in his brilliant book (but with a European focus) The Cattle Plague - A History (2003), notes that rinderpest belongs to the Paramyxoviruses, a morbillivirus which probably existed in Pleistocene bovids; perhaps the first relative that jumped to humans was measles, associated with the domestication of cattle. The English believed that the origin of rinderpest lay in Russia. The Russians believed it came from the Mongols.
    Image
    Gods slaandehand over Nederland, door de pest-siekte onder het rund vee
    [God's lashing hand over the Netherlands, due to the plague disease among cattle]
    Woodcut by Jan Smits (1745) - cattle epidemics evoked theological explanations
    The British government made a grant of £5,000 in 1865 for research into rinderpest, which was apparently the biggest ever investment in medical research up to that point in time. This was also a period of a cholera epidemic, mainly affecting the working class, and it was noted that hardly any money was spent on it. (Spinage:328) The result of the rewards was that a very wide variety of cures was proffered, and Spinage provides an amusing overview. One cure claim came from a Mr. M. Worms of Ceylon and involved garlic, onion, and asafoetida. Worms was somehow related to Baron Rothschild, and the cure was apparently tested on some of Rothschild's cattle with some surprising recoveries. Inoculation, as in smallpox treatment, was tried by many, and it often resulted in the infection and death of the animals.

    As for the Indian scene, it appears that the British government did not do much based on the Hallen committee report. There were attempts to regulate the movement of cattle, but it seems that the idea that the disease could be prevented through inoculation or vaccination had to wait. In the 1865 outbreak in Britain, one of the control measures was the killing and destruction of infected cattle at the point of import. This finally brought an end to outbreaks in 1867. Several physicians in India tried experiments in inoculation. In India natural immunity was noted, and animals that overcame the disease were valued by their owners. In 1890 Robert Koch was called into service in the Cape region on the suggestion of Dr J. Beck. In 1897 Koch announced that bile from infected animals could induce resistance on inoculation. Koch was then sent on to India to examine the plague, leaving behind William Kolle to continue experiments in a disused mine building at Kimberley belonging to De Beers. Around the same time experiments were conducted by Herbert Watkins-Pitchford and Arnold Theiler, who found that serum from cattle that had recovered worked as an effective inoculation. They however failed to publish and received little credit. That Koch, a German, beat the English researchers was a cause of hurt pride.

    Image
    The Brown Institution was destroyed in 1944
    by German bombing
    It is interesting to see how much national pride was involved in all this. The French had established an Imperial Bacteriological Institute at Constantinople with Louis Pasteur as their leading light; it was mostly headed by Pasteur Institute alumni. Maurice Nicolle and Adil-Bey were involved in rinderpest research there, and demonstrated that the causal agent was small enough to pass through bacterial filters. In India, Alfred Lingard was chosen in 1890 to examine the problems of livestock diseases and to find solutions. Lingard had gained his research experience at the Brown Animal Sanatory Institution - whose workers included John Burdon Sanderson. About six years earlier, Robert Koch, a German, had caused more embarrassment to the British establishment by identifying the cholera-causing bacterium in Calcutta. Koch had, however, not demonstrated that his bacterial isolate could cause disease in uninfected animals - thereby failing one of the required tests for causality that now go by the name of Koch's postulates. There were several critiques by British researchers who had been working for a while on cholera in India - these included David Douglas Cunningham (who was also a keen naturalist and wrote a couple of general natural history books as well) and T.R. Lewis (who had spent some time with German researchers). The British government (the bureaucrats were especially worried about quarantine measures for cholera and had a preference for old-fashioned miasma theories of disease) felt the need for a committee to examine the conflict between the English and German claims - and they presumably chose someone with a knowledge of German for it - Emanuel Edward Klein, assisted by Heneage Gibbes. Klein was also from the Brown Animal Sanatory Institution and had worked with Burdon Sanderson. Now Klein, the Brown Institution, Burdon Sanderson and many of the British physiologists had come under attack from the anti-vivisection movement.
During the court proceedings that examined the anti-vivisectionists' claims of cruelty to animals, Klein, an eastern European of Jewish descent with a poor command of English, made rather shocking statements that served as fodder for some science fiction written in that period, with evil characters bearing a close resemblance to Klein! Even Lingard had been accused of cruelty, feeding chickens with the lungs of tuberculosis patients to examine if the disease could be transmitted. E.H. Hankin, the man behind the Ganges bacteriophages, had also been associated with the vivisection researchers, and the British Indian press had even called him a vivisector who had escaped to India.

    Lingard initially worked in Pune, but he found the climate unsatisfactory for working on anti-rinderpest sera. In 1893 he moved the laboratory to the then-remote mountain town of Mukteshwar (or Muktesar, as the British records have it), where his first lab burnt down in a fire. In 1897 Lingard invited Koch and others to visit, and Koch's bile method was demonstrated. The institution, then given the grand name of Imperial Bacteriological Laboratory, was rebuilt and continues to exist as a unit of the Indian Veterinary Research Institute. Lingard was able to produce rinderpest serum in this facility - producing 468,853 doses between 1900 and 1905, with mortality among inoculated cattle as low as 0.43%. The institute grew to produce 1,388,560 doses by 1914-15. Remarkably, several countries joined hands in 1921 to attack rinderpest and other livestock diseases, and it is claimed that rinderpest is now the second virus (after smallpox) to have been eradicated. The Muktesar institution and its surroundings were also greatly modified with dense plantations of deodar and other conifers. Today this quiet little village centered around a temple to Shiva is visited by waves of tourists, and all along the route one can see the horrifying effects of land being converted for housing and apartments.


    Image
    The Imperial Bacteriological Laboratory c. 1912 (rebuilt after the fire)
    Image
    The commemorative column, seen in 2019.
    Image
    Upper corridor
    Image
    A large autoclave made by Manlove & Alliott, Nottingham.
    Image
    Stone marker
    Image
    A cold storage room built into the hillside
    Image
    Koch in 1897 at Muktesar
    Seated: Lingard, Koch, Pfeiffer, Gaffky

    Image
    The habitat c. 1910. One of the parasitologists, a Dr Bhalerao,
    described parasites from king cobras shot in the area.

    Image
    The crags behind the Mukteshwar institute. Chauli-ki-Jhali, a hole in a jutting sheet of rock (behind, and not visible),
    is a local tourist attraction.
    Here then are portraits of three scientists who were tainted in the vivisection debate in Britain, but who were able to work in India without much trouble.
    Image
    E.H. Hankin

    Image
    Alfred Lingard

    Image
    Emanuel Edward Klein


    The cattle plague period coincides nicely with some of the largest reported numbers of Greater Adjutant storks, and perhaps also with a period when vultures prospered, feeding on the dead cattle. We have already seen that Hankin was quite interested in vultures. Cunningham notes the decline in adjutants in his Some Indian Friends and Acquaintances (1903). The anti-vivisection movement, like other minority British movements such as the vegetarian movement, found friends among many educated Indians, and we know of the participation of such people as Dr Pranjivan Mehta in it thanks to the work of the late Dr. S. R. Mehrotra. There was also an anti-vaccination movement, and we know it caused (and continues to cause) enough conflict in the case of humans, but there appears to be little literature on opposition to the vaccination of livestock in India.

    Further reading
    Thanks are due to Dr Muthuchelvan and his colleague for an impromptu guided tour of IVRI, Mukteshwar.
    Postscripts:
    The Imperial Bacteriologist - Alfred Lingard in this case in 1906 - was apparently made "Conservator" for the "Muktesar Reserve Forest" and the 10 members of the "Muktesar Shikar Club" were given exemption from fees to shoot carnivores on their land in 1928. See National Archives of India document.
    Klein, Gibbes and D.D. Cunningham were also joined by H.V. Carter (who contributed illustrations to Gray's Anatomy - more here).
    28-1-2024: The Hebbal Serum Institute (another institution built during Leslie Coleman's tenure) was established in Bangalore around 1927 and produced two million doses of serum from 1927 to 1939.

    Each term, Wiki Education lays down yet another layer of foundation to support an idea that took shape more than 15 years ago: Namely, that students (with the right support) can make high quality contributions to Wikipedia and in doing so, leave their mark on the world’s largest open and free online encyclopedia. Fifteen years on, there’s much we can predict, term after term, but with a rapidly changing information landscape, the Wikipedia Student Program keeps us on our toes! Fall 2025 was, in many ways, a typical term, but it brought with it pivotal changes to our program, as we launched new guidance around AI and deployed the AI detection tool Pangram on student edits. Now, a few months removed, we’ve been able to reflect on what we learned from Fall 2025 and will continue to refine our program as we have always done.

    Fall 2025 in numbers

    In the Student Program, we continually stress quality over quantity, but when taken collectively, the numbers never fail to impress. Here’s what they looked like in Fall 2025:

    • 343 courses participated in a Wikipedia assignment
    • 6,410 students enrolled on the Dashboard
    • Students added 5.03 million words across 6,250 Wikipedia articles
    • To support their work, students added 49,500 references to Wikipedia
    • Closing critical content gaps, students created 363 new articles

    Whether writing about Revolutions in Latin America, Insect Diversity, or Anthropological Theory, our students are making critical updates to Wikipedia in almost every field imaginable.    

    An assignment for our times

    In the face of AI and an increasingly complex information landscape, it might be tempting to view Wikipedia as outdated, a relic of the early internet. To the contrary, Wikipedia is more critical than ever, and our faculty recognize its ongoing value. As one faculty member wrote, “I believe in freely available, high-quality information based on clear, concrete standards of evidence. How we know what we know is more important than ever in today’s age of misinformation and disinformation.” 

    The advent of AI has only added to an information landscape that was already buckling under the weight of mass disinformation campaigns. The Wikipedia assignment is not just another assignment. It offers students keen insight into the social infrastructure of knowledge. As another professor remarked, “There were numerous pedagogical benefits to the Wikipedia assignments in my course. Perhaps most notable was the focus on my students’ critical and productive engagement with the information infrastructure of the internet. Developing various writing, researching, and editing skills related to the community-based platforms of the Wikipedia universe encouraged my students, and me, to develop a greater understanding of the creation, dissemination, and potential for mis-information via other internet-based platforms as well.”

    In response to the growing prevalence of AI, we realized that our students needed guidance not simply on how to navigate AI use on Wikipedia but how to think about it more generally. As a result, we launched a new training module in the Fall as well as a more in-depth look at LLMs broadly speaking. As we engage with our faculty, we’re coming to learn that the Wikipedia assignment is not just a tool for helping students to develop digital and media literacy, but it can also play a critical role in developing AI literacy. As one professor described, “Wikipedia is superior to AI generated information in many ways, and by doing this assignment – my student learned this at a very foundational level. They could clearly see that in well-written Wikipedia articles, every fact is associated with at least one source that EXISTS, that is REAL, and VERIFIABLE. Anyone teaching AI literacy – should be teaching Wikipedia!”

    Sparking joy

    On a day-to-day basis, we often focus on the technical challenges of editing Wikipedia. Its policies can be confusing, and its interface is often daunting to first-time users. Despite its intricacies, students and faculty regularly express how proud they are of their contributions. As one professor noted, “I love teaching with Wikipedia and I am so proud that my students are able to address knowledge gaps about under-represented communities. I also love to see the pride they demonstrate in their work.” 

    More subtle and easier to miss is that many students truly “enjoy” the Wikipedia assignment. In the words of one student, “I can proudly say that I helped improve a Wikipedia page… I was given pure enjoyment doing the research and the work.” Another professor declared, “Not only did they enjoy the semester but I did too.” 

    The pride and joy our faculty and students experience is palpable, and only amplifies the pride and joy we feel at Wiki Education in having the honor of shepherding each cohort of students each term. Thank you to our Fall 2025 faculty and students!

    To learn more about teaching with Wikipedia, visit teach.wikiedu.org.

    Image credit: Solpugid, CC BY-SA 4.0, via Wikimedia Commons

    Wikimedia Hackathon Northwestern Europe 2026

    Wednesday, 18 March 2026 00:00 UTC

    Wikimedia Nederland organised a new type of event this year, the Wikimedia Hackathon Northwestern Europe 2026, which was held last weekend in Arnhem, the Netherlands. And I'm very happy they did, since, unlike in previous years, I will unfortunately be missing the "main" Wikimedia Hackathon (which is happening in Milan at the start of May).

    I continue to believe the primary reason for these events existing is the ability to connect with old and new friends in person. That being said, I did get a bit of technical tinkering done during the weekend as well. This included a dark mode fix to MediaWiki's notification interface and fixes to some visual bugs in MediaWiki's two-factor authentication and OAuth functionality. I also got an older patch of mine about disabling Composer's new auditing functionality merged. And, as usual, I spent a bunch of time helping various people with the various infrastructure pieces I'm familiar with (or at least had to suddenly get familiar with) and approved a bunch of OAuth consumers and other requests.

    We also managed to continue the tradition from the past two Wikimedia Hackathons of nominating more people to receive +2 access to mediawiki/*. That request is still open as of writing, as those have to run for at least a week, but it looks very likely to pass at this point.

    Overall, the event was very well-organized: the venue was great (except that the number of stairs was described in a rather misleading way), the food was great, and the atmosphere was amazing. The pressure that you must Just Get Things Done to justify your attendance, which the main hackathon seems to have recently gained, was clearly missing here, which was great. Also, I will clearly need to bring more Finnish chocolate next time.

    The Friday-and-Saturday timing works great for those of us with other commitments (like university for me) during the week, as it takes full advantage of the weekend but still only eats workdays from a single calendar week. My main gripe with the logistics was the focus on a single sketchy non-free messaging platform for all event-related communications, with the IRC bridge used on the main hackathon channel notably missing.


    ps. Like Lucas, I do have Opinions about so many proudly mentioning they've used "vibe coding" tools during the introduction and showcase. Those opinions are best left for another time, but I do want to note that all of my work and mistakes have still been lovingly handcrafted.

    TomWikiAssist, and the best block reason ever

    Wednesday, 18 March 2026 00:00 UTC

    tl;dr AI thing edits Wikipedia, gets reverted, doxxes its operator, files a civility complaint, and then writes an essay about it.

    As AI crawling and training continues to stress the web, the Wikimedia Foundation continues to change various things in its edge rules and internal processes. The recent Wikimedia Hackathon Northwestern Europe 2026 was likely one of the largest technical events organized after some of the new rate limits came into play, and the event wasn’t without issues (though we got by).

    Image thumbnails are a bit of a different story: the backend service has been restricted in the number of thumbnail sizes that can be generated, stored, and served, with some new defaults put in place.

    Current standard sizes in Wikimedia production: 20px, 40px, 60px, 120px, 250px, 330px, 500px, 960px, 1280px, 1920px, 3840px

    Common thumbnail sizes

    If you want to read some of the research and decisions that went into it, take a look at T211661#8377883 and other linked tickets.
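    The idea behind the new steps, as I understand it (and as the rewriting later in this post assumes), is that a requested width gets bucketed up to the next standard size, and anything above 3840px means serving the original file instead. A small hypothetical Python helper sketches this:

```python
import bisect
from typing import Optional

# Standard thumbnail widths currently served by Wikimedia production.
STANDARD_WIDTHS = [20, 40, 60, 120, 250, 330, 500, 960, 1280, 1920, 3840]

def snap_width(requested: int) -> Optional[int]:
    """Return the smallest standard width >= requested, or None when the
    request exceeds 3840px (meaning: use the full-size original)."""
    i = bisect.bisect_left(STANDARD_WIDTHS, requested)
    return STANDARD_WIDTHS[i] if i < len(STANDARD_WIDTHS) else None
```

    So a historical 240px thumbnail snaps up to the 250px step, while a 4000px request falls through to the original file.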

    Anyway, these changes led some posts on my blog, which used now-unsupported thumbnail sizes, to fail to load said thumbnails.

    Image

    Instead of getting the image (or any image at all), the request is instead served with an error page from the edge (which also happens to be a 429 response), with a link for further information. Though it appears there are no headers indicating when to retry the request.

    Error

    Use thumbnail steps listed on https://w.wiki/GHai. Please contact noc@wikimedia.org for further information (a765913)

    In this particular blog post, the image being used is https://upload.wikimedia.org/wikipedia/commons/thumb/9/90/Wikimedia_Hackathon_Amsterdam_2013.svg/250px-Wikimedia_Hackathon_Amsterdam_2013.svg.png

    The fullsize URL would be https://upload.wikimedia.org/wikipedia/commons/9/90/Wikimedia_Hackathon_Amsterdam_2013.svg, and the “240px x 240px” thumbnail size now actually directs you to a 250px thumbnail with a slightly different size… https://upload.wikimedia.org/wikipedia/commons/thumb/9/90/Wikimedia_Hackathon_Amsterdam_2013.svg/250px-Wikimedia_Hackathon_Amsterdam_2013.svg.png

    The Regex

    I’m going to try and take a similar approach to my Imgur UK fix a few weeks ago and see if I can just rewrite these live as pages are served on the WordPress backend using Real-Time Find and Replace (just in case they again all need tweaking in the future…).

    Rather than hand-crafting this, I put together a prompt for Google Gemini:

    The wikimedia foundation has changed their standard thumbnail sizes...
    Current standard sizes in Wikimedia production: 20px, 40px, 60px, 120px, 250px, 330px, 500px, 960px, 1280px, 1920px, 3840px
    These differ from what used to be used, an thus what I have used in my blog posts in the past.
    
    I have URLS such as https://upload.wikimedia.org/wikipedia/commons/thumb/9/90/Wikimedia_Hackathon_Amsterdam_2013.svg/250px-Wikimedia_Hackathon_Amsterdam_2013.svg.png
    which needs to be rewritten to the next thumb size up...
    The fullsize URL would be https://upload.wikimedia.org/wikipedia/commons/9/90/Wikimedia_Hackathon_Amsterdam_2013.svg, and the "240px x 240px" thumbnail size now actually directs you to a 250px thumbnail with a slightly different size... https://upload.wikimedia.org/wikipedia/commons/thumb/9/90/Wikimedia_Hackathon_Amsterdam_2013.svg/250px-Wikimedia_Hackathon_Amsterdam_2013.svg.png
    
    And likely any thumbnail of above 3840px should just be rewritten to the FULL SIZE image.
    such as https://upload.wikimedia.org/wikipedia/commons/9/90/Wikimedia_Hackathon_Amsterdam_2013.svg
    
    I'mg going to use a wordpress plugin called Real-Time Find and Replace
    It allows a "Find" and "replace with" with regex
    
    Can we do this with a collection of regexes? :)
    Covering all possible input size pixels for historical thumbnails?

    Once I got it to output the regular expressions in a code block instead of a fancy table, I tested one of them and it seemed to be spot on.

    --- 250px ---
    Find:    #(upload\.wikimedia\.org/wikipedia/(?:commons|[a-z]+)/thumb/[^/]+/[^/]+/[^/]+/)(?:12[1-9]|1[3-9][0-9]|2[0-4][0-9])(px-[^"'\s>]+)#
    Replace: ${1}250$2
    
    --- FULL SIZE (Greater than 3840px) ---
    Find:    #(upload\.wikimedia\.org/wikipedia/(?:commons|[a-z]+))/thumb/([^/]+/[^/]+/[^/]+)/(?:384[1-9]|38[5-9][0-9]|39[0-9]{2}|[4-9]\d{3}|\d{5,})px-[^"'\s>]+#
    Replace: ${1}/${2}
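
    As a quick sanity check, the 250px rule above translates directly to Python's re module (my own translation for testing, minus PHP's # delimiters and with \g<n> instead of ${n}):

```python
import re

# The 250px rule: widths 121-249 get bumped up to the 250px step.
pattern = re.compile(
    r'(upload\.wikimedia\.org/wikipedia/(?:commons|[a-z]+)'
    r'/thumb/[^/]+/[^/]+/[^/]+/)'
    r'(?:12[1-9]|1[3-9][0-9]|2[0-4][0-9])'
    r'(px-[^"\'\s>]+)'
)

url = ('https://upload.wikimedia.org/wikipedia/commons/thumb/9/90/'
       'Wikimedia_Hackathon_Amsterdam_2013.svg/'
       '240px-Wikimedia_Hackathon_Amsterdam_2013.svg.png')

# A historical 240px thumbnail URL is rewritten to the 250px step.
rewritten = pattern.sub(r'\g<1>250\g<2>', url)
```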

    But I then realized that these regexes would actually break the blog post I am writing right now, where I include a thumbnail URL in text, rather than in an IMG tag?!

    So I asked Gemini to get a little more specific, with the following prompt:

    I realize that I only want this replacement to happen when the image is actually in img tags? or rather is going to be used as an image in the page, not just the URL used in text (as it is going to be in the article I am currently writing. Can you adjust the rgeexes for that? 

    Which led to adjusted regexes that seem to work nicely. A sample is below, and you can find the full list at https://phabricator.wikimedia.org/P89869

    --- 250px ---
    Find:    #(\b(?:data-)?src=["'](?:https?:)?//upload\.wikimedia\.org/wikipedia/(?:commons|[a-z]+)/thumb/[^/]+/[^/]+/[^/]+/)(?:12[1-9]|1[3-9][0-9]|2[0-4][0-9])(px-[^"'\s>]+)#
    Replace: ${1}250$2
    
    --- FULL SIZE (Greater than 3840px) ---
    Find:    #(\b(?:data-)?src=["'](?:https?:)?//upload\.wikimedia\.org/wikipedia/(?:commons|[a-z]+))/thumb/([^/]+/[^/]+/[^/]+)/(?:384[1-9]|38[5-9][0-9]|39[0-9]{2}|[4-9]\d{3}|\d{5,})px-[^"'\s>]+#
    Replace: ${1}/${2}
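
    Again translated to Python for a quick check (my own translation, not the plugin code): the src= anchor means only URLs inside (data-)src attributes get rewritten, while a bare URL in the article text survives untouched.

```python
import re

# The src-anchored 250px rule: only rewrite URLs used as image sources.
pattern = re.compile(
    r'(\b(?:data-)?src=["\'](?:https?:)?//upload\.wikimedia\.org'
    r'/wikipedia/(?:commons|[a-z]+)/thumb/[^/]+/[^/]+/[^/]+/)'
    r'(?:12[1-9]|1[3-9][0-9]|2[0-4][0-9])'
    r'(px-[^"\'\s>]+)'
)

html = (
    '<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/'
    '9/90/X.svg/240px-X.svg.png"> and as plain text: '
    'https://upload.wikimedia.org/wikipedia/commons/thumb/'
    '9/90/X.svg/240px-X.svg.png'
)

# The src attribute is rewritten to 250px; the bare URL is left alone.
out = pattern.sub(r'\g<1>250\g<2>', html)
```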

    And then, while trying to enter the various regular expressions into the extension, I ran into an issue…..

    Image

    So, perhaps I should look at rewriting the actual content with these regular expressions, rather than rewriting them live…..

    Or, time to make my own WordPress extension for this simple task.

    WordPress plugin

    What we are aiming for here can essentially be achieved in just a small number of lines of PHP code. And most of that would be the regex itself…

    You can find the plugin code on Github that is written in this section, and I’m currently in the process of putting it on the WordPress plugin “store” for anyone else to use.

    add_filter( 'the_content', 'update_wikimedia_thumbnail_sizes', 20 );
    
    function update_wikimedia_thumbnail_sizes( $content ) {
        // Skip processing if we are in the WordPress admin dashboard
        if ( is_admin() ) {
            return $content;
        }
    
        $replacements = [
            // 20px
            '#(\b(?:data-)?src=["\'](?:https?:)?//upload\.wikimedia\.org/wikipedia/(?:commons|[a-z]+)/thumb/[^/]+/[^/]+/[^/]+/)(?:[1-9]|1[0-9])(px-[^"\'\s>]+)#' => '${1}20$2',
            
            // 40px
            '#(\b(?:data-)?src=["\'](?:https?:)?//upload\.wikimedia\.org/wikipedia/(?:commons|[a-z]+)/thumb/[^/]+/[^/]+/[^/]+/)(?:2[1-9]|3[0-9])(px-[^"\'\s>]+)#' => '${1}40$2',
            
            // 60px
            '#(\b(?:data-)?src=["\'](?:https?:)?//upload\.wikimedia\.org/wikipedia/(?:commons|[a-z]+)/thumb/[^/]+/[^/]+/[^/]+/)(?:4[1-9]|5[0-9])(px-[^"\'\s>]+)#' => '${1}60$2',
            
            // 120px
            '#(\b(?:data-)?src=["\'](?:https?:)?//upload\.wikimedia\.org/wikipedia/(?:commons|[a-z]+)/thumb/[^/]+/[^/]+/[^/]+/)(?:6[1-9]|[7-9][0-9]|10[0-9]|11[0-9])(px-[^"\'\s>]+)#' => '${1}120$2',
            
            // 250px
            '#(\b(?:data-)?src=["\'](?:https?:)?//upload\.wikimedia\.org/wikipedia/(?:commons|[a-z]+)/thumb/[^/]+/[^/]+/[^/]+/)(?:12[1-9]|1[3-9][0-9]|2[0-4][0-9])(px-[^"\'\s>]+)#' => '${1}250$2',
            
            // 330px
            '#(\b(?:data-)?src=["\'](?:https?:)?//upload\.wikimedia\.org/wikipedia/(?:commons|[a-z]+)/thumb/[^/]+/[^/]+/[^/]+/)(?:25[1-9]|2[6-9][0-9]|3[0-2][0-9])(px-[^"\'\s>]+)#' => '${1}330$2',
            
            // 500px
            '#(\b(?:data-)?src=["\'](?:https?:)?//upload\.wikimedia\.org/wikipedia/(?:commons|[a-z]+)/thumb/[^/]+/[^/]+/[^/]+/)(?:33[1-9]|3[4-9][0-9]|4[0-9][0-9])(px-[^"\'\s>]+)#' => '${1}500$2',
            
            // 960px
            '#(\b(?:data-)?src=["\'](?:https?:)?//upload\.wikimedia\.org/wikipedia/(?:commons|[a-z]+)/thumb/[^/]+/[^/]+/[^/]+/)(?:50[1-9]|5[1-9][0-9]|[6-8][0-9][0-9]|9[0-5][0-9])(px-[^"\'\s>]+)#' => '${1}960$2',
            
            // 1280px
            '#(\b(?:data-)?src=["\'](?:https?:)?//upload\.wikimedia\.org/wikipedia/(?:commons|[a-z]+)/thumb/[^/]+/[^/]+/[^/]+/)(?:96[1-9]|9[7-9][0-9]|10[0-9][0-9]|11[0-9][0-9]|12[0-7][0-9])(px-[^"\'\s>]+)#' => '${1}1280$2',
            
            // 1920px
            '#(\b(?:data-)?src=["\'](?:https?:)?//upload\.wikimedia\.org/wikipedia/(?:commons|[a-z]+)/thumb/[^/]+/[^/]+/[^/]+/)(?:128[1-9]|129[0-9]|1[3-8][0-9][0-9]|190[0-9]|191[0-9])(px-[^"\'\s>]+)#' => '${1}1920$2',
            
            // 3840px
            '#(\b(?:data-)?src=["\'](?:https?:)?//upload\.wikimedia\.org/wikipedia/(?:commons|[a-z]+)/thumb/[^/]+/[^/]+/[^/]+/)(?:192[1-9]|19[3-9][0-9]|2[0-9][0-9][0-9]|3[0-7][0-9][0-9]|38[0-2][0-9]|383[0-9])(px-[^"\'\s>]+)#' => '${1}3840$2',
            
            // FULL SIZE (Greater than 3840px)
            '#(\b(?:data-)?src=["\'](?:https?:)?//upload\.wikimedia\.org/wikipedia/(?:commons|[a-z]+))/thumb/([^/]+/[^/]+/[^/]+)/(?:384[1-9]|38[5-9][0-9]|39[0-9]{2}|[4-9]\d{3}|\d{5,})px-[^"\'\s>]+#' => '${1}/${2}'
        ];
    
        // Run the replacement
        return preg_replace( array_keys( $replacements ), array_values( $replacements ), $content );
    }
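
    For completeness, the plugin's final rule (anything above 3840px falls back to the full-size original) can also be exercised in Python (again my own translation for illustration; the live plugin runs the PHP version above):

```python
import re

# The "greater than 3840px" rule: strip the /thumb/ segment and the
# size prefix entirely, pointing the src at the original file.
pattern = re.compile(
    r'(\b(?:data-)?src=["\'](?:https?:)?//upload\.wikimedia\.org'
    r'/wikipedia/(?:commons|[a-z]+))/thumb/([^/]+/[^/]+/[^/]+)/'
    r'(?:384[1-9]|38[5-9][0-9]|39[0-9]{2}|[4-9]\d{3}|\d{5,})px-[^"\'\s>]+'
)

html = ('<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/'
        '9/90/X.svg/4000px-X.svg.png">')

# The oversized thumbnail request is rewritten to the original file URL.
out = pattern.sub(r'\g<1>/\g<2>', html)
```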

    Creating a little framework around such a regex set, and having it be easily selectable by users, only took a few minutes.

    Image

    And the plugin also gives you the flexibility to define your own rules, so I could add my Imgur rewrite too, or implement that as an additional custom set later.

    You can also preview changes across all pages, and optionally apply them permanently to the actual content.

    Image

    Anyway, that closes my morning adventure.

    Wikimedia Hackathon Northwestern Europe 2026 recap

    Tuesday, 17 March 2026 00:00 UTC

    Last weekend, I participated in the Wikimedia Hackathon Northwestern Europe 2026, which took place in Arnhem from 13 to 14 March 2026, ca. seven weeks before the “big” Wikimedia Hackathon 2026 in Milan. Just like for previous Wikimedia Hackathon events, I want to write a bit about the experience.

    As is traditional for the Dutch (mini) hackathons, there was an informal meetup the evening before which involved a variety of beer options. I’m usually not great in this type of setting, but I had a somewhat better experience this time than at the mini hackathons, in that there were more non-Dutch speakers present; the evening further improved once I stepped outside and talked to folks there instead of inside the place. I had the opportunity to chat with several people, including meeting SomeRandomDeveloper for the first time, which was great. (On my way back to the hotel there was an unfortunate incident which we need not dwell on further.)

    The hackathon properly started the next morning, when Multichill asked me on Telegram if I wanted to play the piano; I rushed to the venue, was ushered into the auditorium, and started playing some Scott Joplin to lure the rest of the crowd into the room so the opening ceremony could start. (To my ears, the piano was fine and well tuned, though some other folks seemed to think it was a bit out of tune ^^) People seemed to enjoy this, which was nice; also, to jump a bit ahead in the chronology, this resulted in Ege reaching out to me and asking if we wanted to do a song together! He suggested The House of the Rising Sun, and I managed to get a key to the auditorium so we could practice on the second day; once I listened to the reference recording to get the tempo right (I had it in my head twice as fast as it actually is, oops), the rehearsal went well, so we performed the song to open the showcase / closing ceremony, which was great. I would love to have more hackathon venues with pianos and/or other musical instruments available!

    (Side note: one thing about this event that makes me a bit uncomfortable is how much we all relied on Telegram. If an attendee wasn’t on Telegram, they’d have missed not only a lot of chatter, but also some announcements during the day – or rather, they’d have found out about things like lunch, the group photo, or the showcase Etherpad from their table neighbors instead. The Telegram channel that’s used for the main hackathon is at least bridged to #wikimedia-hackathon on Libera Chat (IRC), but this hackathon used a separate channel without, as far as I’m aware, any bridge. I think it probably would’ve been better to stick to the main hackathon channel and just post a warning in there a few days in advance that the channel would become quite chatty for a while and that everyone not in Arnhem should consider muting it until this event was over.)

    Jumping back to the opening, I briefly introduced myself, mentioning my nigh-perennial hackathon project (T231755, making language names translatable on translatewiki.net) and some other areas where I might be able to help out. The introduction procedure was different than it usually is at the main hackathon – instead of having people line up at the podium to introduce themselves, a microphone was handed around the audience and everyone got a turn. I thought this was actually a good way to do it (though how well it would scale to the main hackathon is another question): it removes the barrier of having to consider whether you should introduce yourself or not, instead assuming that everyone deserves an introduction. (The Etherpad trying to take notes of everyone’s introductions got a bit chaotic during this, but mostly it worked well.)

    After the showcase, I settled into what is by now my usual hackathon mode: find a table with some people I know and start working on my “background” project (in this case, making language names translatable) while keeping an eye on Telegram and waiting for collaboration opportunities either there or from people coming up to me. I won’t recount everything in detail here, but I was able to help several people with various issues, and contribute input to some ongoing discussions.

    On my “background” project, I made significant progress. I had thankfully left fairly detailed notes in a comment two years ago (at the 2024 Tallinn hackathon; in 2025 in Istanbul I had mostly worked on WDactle instead). The first part of that comment (fixing the missing language names) got fixed at the mini hackathon half a year ago (though I made two more fixes at this hackathon), so now I tackled the other part, in a series of changes that I uploaded to Gerrit: implement a temporary script to move language names to i18n files, run it, load language names from there, update rebuild.php for these changes, remove now-unused LocalNames files, and remove the temporary script again. Not all of the changes in this chain are fully ready yet, but the first few already got merged anyway; we’ll see how fast the rest of this work matures (I might work some more on it during the main hackathon in Milan). At the showcase, I briefly summarized the goal of this work and showed the patches I uploaded.

    My other larger project at this hackathon came from Ideophagous, who wanted to create some Moroccan Arabic lexemes (T420020). I showed him the Wikidata Lexeme Forms tool, and he agreed to create some Moroccan Arabic templates for it, which can be used to create lexemes either manually or in bulk. By the end of the second day, the templates were finished enough that I could deploy them to the tool, and since then, they’ve been used to create several lexemes! Ideophagous presented this project at the showcase.

    (Another side note, while we’re on the subject of the showcase – I mentioned this on Mastodon but wanted to elaborate on it a bit more. Quite a few showcase presentations mentioned they used “AI” tools, i.e. LLMs (nothing text-to-image or text-to-video as far as I remember; there was one project using what I’d call “classical” machine vision, which I’m not including in this paragraph), often in the form of “vibe coding”. And it made my stomach turn every time. These tools are built on staggering amounts of training data, stolen from all over the internet (and offline sources), including Wikimedia projects, with no attribution. In the process of hoovering up this training data, they create ungodly amounts of network traffic, making the lives of Wikimedia’s SREs miserable for the past several years, and ultimately have made it necessary to impose new rate limits which even affected us during the hackathon itself. I don’t know if the Wikimedians using these tools don’t understand this, or if they don’t care, but either way I think it fucking sucks. Rant over, at least for now; though I suspect my post about the Milan hackathon might end up including something rather similar.)

    One other project I worked on was m3api, my library for using the MediaWiki Action API from JavaScript; I’ve been working on replacing its network interface, so that the m3api-rest extension package can support more request methods, request body types, and response body types. I mostly worked on this “around” the hackathon, rather than “during” (e.g. during train rides, or late in the evening when I had nothing else to do), because this is a volunteer project and I was supposed to be at the hackathon on staff time :) but I still managed to make some good progress on m3api-rest’s support for the new network interface: on the fetch branch, it now has some support for sending additional request body types, and plans for how to implement the rest. I didn’t showcase this because it’s just ongoing work on the side, but I still wanted to mention it here.

    And that’s it! Overall, this was a great hackathon for me – I could easily reach it by train, and I met lots of lovely people, exchanged some sweets, got some useful work done, and also had some non-technical “projects” (music; at some previous hackathons, juggling ^^). I’m looking forward to the next one in Milan!

    Wikimedia Hackathon Northwestern Europe 2026

    Monday, 16 March 2026 09:14 UTC

Historically I’m terrible at post-hackathon write-ups, though a few do exist… (#hackathon posts). For the past few days I have been attending the Wikimedia Hackathon Northwestern Europe 2026 in Arnhem, NL with around 70 other people. Around 42 projects were shown at the showcase, and I want to briefly look at some of those, and also document some of the other things that were going on in my vicinity.

On the whole, this was a great hackathon: larger than the last NL-organized hackathon, a beautiful venue, good organization, good food, good people, lots of conversation, and, for me at least, everything was very convenient.

    Goings on

SavannahHQ

    Ahead of the Hackathon, Siebrand had the idea of people being able to monitor the impact of events, specifically Hackathons, and growth or retention in the various technical spaces that Wikimedia has.

This reminded Ollie and me of a talk that we heard at OggCamp a few years earlier about a product called SavannahHQ, which is an open source project, with a paid SaaS service too, for giving “you the insights you need to better understand, nurture, and grow your community.”

This sounded like it roughly aligned, and we spent some hours trying it out, importing some basic RSS and GitHub data, and making a bunch of patches on a fork with some fixes and improvements to make the free and open / non-billable product easier to use.

This product might be neat for smaller communities with fewer possible data sources, but we eventually decided we might take a different approach to try and figure out some more numbers (closer to the raw data).

In the images you can see the “activity” that was imported from various sources, along with the actors that were detected; you could then map actors in different places together, add events, and track the change in community activity and size around those events.

A good experiment, and some things learnt, but I’ll be trying a different approach next…

    Wikibase query prefixes

Firstly… what are query prefixes?

    Prefixes are shorthand aliases for long resource URIs that allow you to write more concise and readable SPARQL queries.

    Instead of http://www.wikidata.org/entity/Q12165555

    You might be able to just write… wd:Q12165555

    These are user definable as part of your query, but sometimes SPARQL services also have defaults provided.
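Mechanically, prefix expansion is nothing more than string concatenation against a table of prefix-to-URI mappings. A minimal sketch in Python (the wd: mapping is Wikidata’s standard entity prefix; the function and table names are my own):

```python
# Illustrative only: the wd: entry matches Wikidata's standard entity
# namespace; a real SPARQL service would carry many more mappings.
PREFIXES = {
    "wd": "http://www.wikidata.org/entity/",
}

def expand(qname, prefixes=PREFIXES):
    """Expand a prefixed name like wd:Q12165555 into its full URI."""
    prefix, _, local = qname.partition(":")
    return prefixes[prefix] + local

print(expand("wd:Q12165555"))
# http://www.wikidata.org/entity/Q12165555
```

A SPARQL engine does the same lookup for every prefixed name in your query, using the PREFIX declarations you supply plus whatever defaults the service provides.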

At the start of Friday, a few of us spent some time looking through and thinking about the current state of Wikidata and Wikibase query service prefixes (T419953 Discuss and Document how to handle SPARQL Prefixes across Wikibases), with hopefully a few decisions made, and one already written up as an actionable task (T419994 Add a `wb` SPARQL prefix-prefix automatically to refer to the current wikibase).

    Right now, many prefixes like wdt: and p: are hardcoded for Wikidata, or perhaps have been overridden to point to a local installation, which makes it hard for people using Wikibase Cloud or other private installs because their tools and autocomplete often point to the wrong data. The inconsistencies also lead to a pain point for tool developers, where prefixes can’t be used reliably (even when full URIs could be).

We proposed a standard “wb” prefix-prefix for the local wikibase, always. This should make it much easier for users to write queries without needing to type out full, long URIs, and it will keep things consistent across all different Wikibase sites. wd:-style prefixes would be preferred for Wikidata URIs, and again the goal would be to make this consistent on all installs. An additional set of fully customizable prefixes could then also be allowed, if a particular installation wants to set them up.

    A few other observations were made:

    • The WikibaseManifest exposes something that looks like prefixes but is not: it has keys relating to URI parts, but the keys are not themselves prefixes, just a standard way of looking up the URI part.
    • wd: as a prefix is already used to mean multiple things in this space, but really we all agreed this should be reserved for wikidata.
    • Other conversations throughout the Hackathon came back around to highlight the importance of discoverable URIs or known prefixes to enable tool developers to make tools work for all wikibases more easily.

The work here is not done yet, but the path is clear, and I hope WMDE will try to action it in the not-too-distant future.

It was suggested that I try to link things like this to current Wikibase / Wikidata / WMDE goals in order to increase the likelihood that they will get done. It doesn’t look like this fits within any of the Q1 plans, though looking at the plan, it’s likely included in these parts…

    • “The distributed Wikibase ecosystem is more sustainable because of […] increased feature parity, and interoperability”
    • “The ability to federate knowledge across instances has improved”
    • “We have made Wikibase self-hosting operations more accessible, robust and easier to manage”

    Integraality

    Related to the above, I spent some time talking with Jean-Fred about Integraality and specifically “T294892 integraality for Wikibases?” which again touches on things such as default query prefixes above.

However, one of the main things that has actually been documented now (rather than just discussed) is T420096 Universal proxy authentication for any tool to edit any Wikibase, which could make life easier… but could also be a terrible idea… (Think Magnus’s tool Widar, but for all tools and all wikibases.)

N tools interacting with M independent Wikibases results in N × M manual configurations, OR each wikibase needing to deploy its own version of each tool, leading to less control for tool authors and more work for wikibase creators…

There would be a lot of downsides to something like this… Loss of auditability in terms of which “tool” or consumer actually caused an action? The proxy would be a large and growing pile of security-related data, and also a single point of failure.

Probably something like RFC 7591, the OAuth 2.0 Dynamic Client Registration Protocol (https://www.rfc-editor.org/rfc/rfc7591), would be a better idea in this space. Independent Wikibases can be configured to trust a central Identity Provider (IdP). This could be Wikimedia’s CentralAuth, GitHub, ORCID or some other known provider. A user goes to a new Wikibase -> clicks “Log In” -> is redirected to the central IdP -> logs in -> is redirected back. The Wikibase automatically provisions a local user account mapped to that global identity. The user never creates a new password. Which is essentially T383142 Enable Wikimedia login on Wikibase.cloud sites, but for all Wikibases. This would likely make use of PluggableAuth and perhaps WSOAuth, which can be configured to authenticate users with Wikimedia login.

    Wikibase community telegram groups

    There were a fair few discussions going on about wikibase at the hackathon, and at some point during all of those the vast array of telegram groups came up.

I believe one of the oldest groups is the wikibase community telegram group, currently with 371 members. There are then separate single-topic channels for things such as wikibase.cloud (226 members), broadcast channels and such, and finally the newest channel on the block, wikibase suite, with 47 members.

The wikibase suite channel has some nice structure to it, with sub-channels for various topics such as configuration; however, the wikibase community channel is much larger and has much wider reach. Personally, I still think there’s confusion around why this “wikibase suite” term has been segregated from just wikibase. And generally we felt the community would benefit from a single channel for discussing wikibase installations, with the structure of the suite channel but the involvement and spread of the main community channel.

It looks like this is primarily something for WMDE to consider, as they have most of the owner and admin rights across both of these channels.

    Developer activity and retention

After deciding that SavannahHQ was probably not the tool I wanted to use for the job, we just started scraping some data, primarily from Phabricator via https://wikimedia.biterg.io/, and also constructing a git log of all Wikimedia-related repositories that I could find across Gerrit, GitLab and GitHub… The initial scrape and clone took some time, and eventually we started to get towards having data from each source that included:

    • Source: Where has the data / activity event come from [e.g. phabricator]
    • Type: What was the event type [e.g. task-create]
    • Actor: Who or what triggered the event, such as a username [e.g. Addshore]
    • Identity: The unique identity, which when combined with the source and type could be used for deduplication, and for looking up the thing again [e.g. T12345]
    • Timestamp: The time the event occurred

So for the Phabricator scrape via Bitergia, the entry might look something like {"source":"phabricator","timestamp":"2020-04-28T10:20:25+00:00","type":"task/create","actor":"Addshore","identity":"T251244"}

    And for a gitlab repository, perhaps something like {"source":"git/gitlab/addshore/backstage","timestamp":"2021-10-18T09:56:38-04:00","type":"commit","actor":{"name":"Addshore","email":"addshore@example.org"},"identity":"c3699d5bd3141bc8dbe688419169f352d1502c9f","metadata":{"subject":"foo bar test"}}
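Given records in this shape, the deduplication on the (source, type, identity) triple mentioned above could be sketched like this (the field names follow the list above; the helper itself is hypothetical):

```python
import json

def dedupe(events):
    """Keep the first event seen for each (source, type, identity) triple."""
    seen = set()
    unique = []
    for event in events:
        key = (event["source"], event["type"], event["identity"])
        if key not in seen:
            seen.add(key)
            unique.append(event)
    return unique

# The same Phabricator event scraped twice collapses into one record.
raw = ('{"source":"phabricator","timestamp":"2020-04-28T10:20:25+00:00",'
       '"type":"task/create","actor":"Addshore","identity":"T251244"}')
events = [json.loads(raw), json.loads(raw)]
print(len(dedupe(events)))  # 1
```

The timestamp is deliberately left out of the key, since the same logical event can be reported with slightly different timestamps by different scrapers.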

    I didn’t manage to collect all of this data during the hackathon, but spat out some graphs to present during the showcase anyway with an indication of the sort of insights this might be able to show you…

So NOTE: the graphs below are NOT COMPLETE, so really you should totally ignore them until I get to look at them for another round…

I think this is certainly an area worth continuing to explore, but in order to get to any meaningful point, more raw data is needed and more refining needs to happen:

    • More sources (try not to be scared):
      • RSS / blogs
      • Chat logs and activity (IRC, telegram)
      • On wiki edits of JS, CSS, Templates and Modules and the relevant talk pages / docs
      • A more complete list of git repositories
      • More phabricator activity (comments would likely be the next most relevant)
      • GitLab MRs & Issues
      • Gerrit code review
      • SAL (Server admin log)
      • Mailing list posts

    And I am sure people could come up with more.

    Already there are additional things worth considering:

    • Ignoring some repos, such as “kubernetes” which is forked into the WMF Gerrit
    • Flagging bots and automation actors throughout the above
    • Some attempt at deduplicating / connecting the same actors across various places where possible
    • Automation, automation, automation…
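The actor-connection point in particular is essentially a union-find problem: each known link between identities merges two groups. A minimal sketch (the platform/username pairs are just examples):

```python
class ActorMerge:
    """Union-find over (platform, username) pairs, so that identities
    known to belong to the same person collapse into one group."""

    def __init__(self):
        self.parent = {}

    def find(self, x):
        # Lazily register unseen identities, then walk to the root
        # with path halving to keep lookups fast.
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

m = ActorMerge()
m.union(("phabricator", "Addshore"), ("gitlab", "addshore"))
m.union(("gitlab", "addshore"), ("gerrit", "addshore"))
same = m.find(("phabricator", "Addshore")) == m.find(("gerrit", "addshore"))
print(same)  # True
```

With something like this in place, the per-event "actor" field can be replaced by the group representative before counting unique contributors.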

Noteworthy in this space would be https://techcontibs.toolforge.org, which is a very cool tool for visualizing your individual technical contributions across multiple platforms.

And https://wikimedia.biterg.io, which contains high-level activity trend data for some platforms, to some level, without too much further analysis.

And https://strategy.wikimedia.org/wiki/Editor_Trends_Study/Results, which exists for Wikimedia editor trends, but not for the technical community.

    I hope to be looking back at this soon…

    Showcase

    You can find the full showcase listing on wiki, but here are a few bits that I will particularly want to remember.

    Telegram commons uploader

Siebrand and Maarten managed to whip together a Wikimedia Commons telegram uploader bot (which you can already use).

This really lowers the barrier to entry for image uploads if you are out and about. You no longer have to use a dedicated app such as the Commons Android app, or the web-based browser upload flows; you can just send your image (as a file) to the bot, answer some questions, and it will appear on Commons!

    You can read more about it on wiki, find the bot on Telegram, and see all images that have been uploaded by the bot so far in the dedicated category.


    Wikimedia Developer Starter Kit

    Very much a starting page at https://meta.wikimedia.org/wiki/User:Eugene233/NewDevKit, but also a lovely idea.

Maybe at some point we will have a single point of reference that we all agree on and like to link newcomers to across the board.


    Wiki as Git!

    Ever wanted to know who to blame for a specific part of an article? Now you can.

This app takes your request and drags the history into git, hosted on GitHub for you to visualize and run a blame on.

    https://wiki-as-git.netlify.app/en.wikipedia.org/Brazil%20at%20the%202026%20Winter%20Paralympics

wiki-as-git has existed for many years; what was hacked on at this hackathon is the concept of seeing the Git history just by browsing a URL.


    Solving the hackathon logo puzzle 😎

The logo included at the top of this post, and throughout the other pages relating to the Hackathon, had a secret code in it. And one evening a bunch of people got together and figured it out (spoiler: it was a rickroll).

    The most complete working can be found at https://gist.github.com/Krinkle/ace3f2023a250ff387d432bdb5c22c83, with a link right at the end showing you the result (https://people.wikimedia.org/~krinkle/wmhack2026-puzzle/13-workspace.html).

    And if you want an interactive tool to try and figure it out yourself (with a fair bit of help already in there), see https://simon04.github.io/Wikimedia-Hackathon-Northwestern-Europe-2026/


    weeklyOSM 816

    Sunday, 15 March 2026 18:40 UTC

     

    05/03/2026-11/03/2026

    lead picture

    [1] OpenSeaMap-vector | © k-yle | map data © OpenStreetMap Contributors.

    Mapping

    • AndreaDp27 has proposed a tagging scheme to map officially designated civil protection areas. Voting opened on 9 March 2026 and will close on 23 March 2026.

    Community

    • At the recent State of the Map Ben Hur Pintor delivered a presentation titled ‘Awesome (OSM) Games’, highlighting a range of games that make use of OpenStreetMap data, though they are not necessarily designed to contribute back to the mapping platform.
    • Derlamaer has proposed a new OpenStreetMap tag for detector-operated pedestrian signals (detector_operated=*).
    • Mikel Maron vibe coded an OpenStreetMap – Overture Maps conflation tool.
    • Natfoot has proposed the railway=trail tag to mark bike routes along rail trails.
    • Rene78 noticed that it is possible to politely ask ChatGPT to generate a proper opening_hours tag.
    • Simon Poole outlined several challenges faced by the Swiss OpenStreetMap community while they attempted to import municipal boundary data from Switzerland’s federal GIS department into OpenStreetMap.
    • In response to Overture Maps’ recent attempt to make their Global Entity Reference System (GERS) an Open Geospatial Consortium standard, Simon Poole argued that OSM object IDs also suffice as an alternative to GERS. Meanwhile, in a separate diary entry, rphyrin shared the same sentiment.
    • Simon Poole reported that the Vespucci app might run into problems on older Android devices due to root certificate expiration, because older versions of Android only update certificates during full system updates. He suggested that users manually install the certificate as a temporary workaround for the issue, since Vespucci is currently not prepackaged with the full set of certificates, all while acknowledging that this method could increase user friction to an unacceptable level.
    • Pascal Neis analysed the recently introduced company and location fields in OpenStreetMap user profiles, concluding that his HDYC profiles provide more reliable indicators because they are derived directly from collected and analysed contribution data, rather than from self-declared free-text profile information.

    Local chapter news

    • Jochen Topf reported that FOSSGIS e.V. has received funding from the German Foundation for Engagement and Volunteering to carry out an OpenStreetMap training programme.

    Maps

    Open Data

    • François Lacombe has compared several analytical frameworks used to examine data-sharing practices in France.

    Software

    • HeiGIT reported that they have developed an open dataset platform called OpenAccessLens that visualises physical access to education and healthcare services worldwide. It shows how far people are from the nearest schools and hospitals, expressed in travel time or distance.
    • The team at the OSM Website shared a recap of their latest work, including progress with the transition to MapLibre and a number of fixes to user experience.

    Programming

    • Altsybeyev explained how MapMagic developed its own topographic maps by combining OSM data and the Mapterhorn digital elevation model.

    Releases

    • CoMaps has released version 2026.03.09, adding support for Type 1 combo EV chargers, displaying amounts charged for use of facilities as well as the population of cities. Further, it added and improved support for POI types such as entertainment attractions and water shops.
    • Bastian Greshake Tzovaras has published version 0.4.0 of the CoMaps map-distributor Python CLI tool. It improves the management of downloads and partial downloads.
    • Christoph Hormann announced that the OpenStreetMap Carto maintainers have prepared a new major release of the OpenStreetMap Carto stylesheet.

    Did you know that …

    • … recently QGIS had been using the OpenStreetMap tile server more than OpenStreetMap.org itself? Some QGIS plugins could be used as OSM tile bulk downloaders and might be responsible for this issue, but a safeguard has recently been added to prevent such things.

    OSM in the media

    • Falk Steiner, of Heise, argued that OpenStreetMap bears partial blame for the recent cable bridge fire that caused a blackout across parts of Southwest Berlin, citing the platform’s open data practices as having contributed to the exposure of critical infrastructure location data that could be used for sabotage purposes.
    • Emhraim, of GNU/Linux.ch, has showcased some mobile apps for editing OpenStreetMap: CoMaps and StreetComplete.
    • In a recent interview in Basta!, a French independent online news outlet, Jérôme Hergueux, a researcher at the Centre national de la recherche scientifique specialising in social networks, claimed that the OpenStreetMap project is ‘now largely dormant’ because another for-profit navigation app, which also provides map services, discourages new contributors from joining the community.

    Other “geo” things

    • Researchers from the Institute of Historical Research, University of London have built ‘Layers of London’, a free historic map of London where users can contribute stories, memories, and histories to create a social history resource about their locality.
    • Historian Ivan Malara accidentally discovered Galileo Galilei’s handwritten annotation in a copy of Ptolemy’s The Almagest in Italy’s National Central Library of Florence.
    • Isaac Corley and Caleb Robinson have started a blog GeoSpatial ML, which offers articles on machine learning, remote sensing, and other topics. You can follow them using their RSS channel or on their Substack. Their latest post was titled ‘Training a Water Segmentation Model with TorchGeo’.
    • Pierre Sauche commented on Le Rize – mémoires, cultures, échanges, which includes an interactive web map where you click to view historic photos and other information. The project has the support of IGN, the French geospatial agency.
    • The University of Zaragoza led the development of an online cartographic viewer, as part of the FirePaths Project for forest fire risk analysis. It uses free and open software and OGC standards, including OpenStreetMap as a base map.
    • Katharina Seeger and Philip Minderhoud warn that the sea level is much higher than assumed in most coastal hazard assessments. In an article published in Nature, they argue that nearly 99% of evaluated assessment reports are affected by an incorrect methodology, which assumes a mean sea level based on global geoid models, not using precise techniques (such as using airborne Lidar) and not considering the coastal elevation.

    Upcoming Events

    Where Venue What When
    March Missing Maps Mapathon 2026-03-12 – 2026-03-13
    Magrathea Laboratories Chaos Computer Club Fulda OSM-Tools: Wenn die Welt zur Spielwiese wird 2026-03-13
    Leuven Romaanse Poort Camera’s in kaart brengen 2026-03-14
    Online A Mapathon to enrich participatory mapping of short supply chains around the Tokikoa label in the Basque Country 2026-03-17
    Zaragoza Online Mappy Hour OSM España 2026-03-17
    Missing Maps London: (Online) Mid-Month Mapathon [eng] 2026-03-17
    Lyon Tubà Réunion du groupe local de Lyon 2026-03-17
    Bonn Dotty’s 198. OSM-Stammtisch Bonn 2026-03-17
    Online Lüneburger Mappertreffen (online) 2026-03-17
    MJC de Vienne Rencontre des contributeurs de Vienne (38) 2026-03-18
    Online Mapathon – Ärzte ohne Grenzen 2026-03-18
    Stainach-Pürgg Online 20. Österreichischer OSM-Stammtisch (online) 2026-03-18
    Gent Tramzwart, KASK Camera’s in kaart brengen 2026-03-19
    Heidelberg DEZERNAT#16 Rhein-Neckar OSM Treffen // Intro iD-Editor 2026-03-19
    OSMF Engineering Working Group meeting 2026-03-20
    Olomouc Přírodovědecká fakulta Univerzity Palackého Missing Maps Day Olomouc 2026 2026-03-21
    Domplatz Fulda Frühlingsmapping 2026 2026-03-22
    Missing Maps : Mapathon en ligne – CartONG [fr] 2026-03-23
    Stadtgebiet Bremen Online und im Hackerspace Bremen Bremer Mappertreffen 2026-03-23
    Pôle Numérique Brest Iroise Rencontre OpenStreetMap et Territoires 2026-03-24
    Göttingen Uni Göttingen FOSSGIS-Konferenz 2026 2026-03-24 – 2026-03-27
    Derby The Brunswick, Railway Terrace, Derby East Midlands pub meet-up 2026-03-24
    Düsseldorf Online bei https://meet.jit.si/OSM-DUS-2026 Düsseldorfer OpenStreetMap-Treffen (online) 2026-03-27
    Chemnitz Neues Hörsaalgebäude, TU Chemnitz Chemnitzer Linux-Tage 2026 2026-03-28 – 2026-03-29
    Göttingen Uni Göttingen, Fakultät für Geowissenschaften FOSSGIS 2026 – OSM-Samstag 2026-03-28
    Local Chapters & Communities Congress 2026 2026-03-28
    Vélo Utile rencontre OSM 2026-03-28
    Mira-Bhayander DBT Café, Mira Road OSM Mumbai Mapping Party No.8 (Western Line – North) 2026-03-28
    Hannover Kuriosum OSM-Stammtisch Hannover 2026-03-30

    Note:
    If you would like to see your event here, please put it into the OSM calendar. Only data which is there will appear in weeklyOSM.

    This weeklyOSM was produced by MarcoR, MatthiasMatthias, PierZen, Raquel IVIDES DATA, Strubbl, Andrew Davidson, barefootstache, derFred, izen57, s8321414.
    We welcome link suggestions for the next issue via this form and look forward to your contributions.

    We are witnessing a resurgence and evolution of Command Line Interfaces (CLIs), accelerated by AI agents. Text-based, scriptable CLI tools work very well with LLM-based workflows. Accessing Wikipedia articles during an agent session is common. Usually, a webfetch call is used to get the HTML for a page from a URL like https://en.wikipedia.org/wiki/2026_Winter_Olympics.

That works, and LLMs are smart enough to read HTML. But there is a cost: HTML is for rendering, so the model must ignore a lot of non-content markup to get to the useful text. That increases token usage and adds context noise. Can we improve this?
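To make that cost concrete before going further: even on a toy fragment, most of the bytes are markup rather than content. A minimal sketch using Python’s standard-library HTML parser (the fragment is invented; real article pages are far noisier):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect only the text nodes, discarding all tags and attributes."""

    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

# A toy stand-in for a fetched article page.
html_page = ('<div class="mw-parser-output"><p>The <b>2026 Winter Olympics'
             '</b> opened in February.</p></div>')
extractor = TextExtractor()
extractor.feed(html_page)
text = "".join(extractor.parts)
print(text)
print(len(text), "content characters out of", len(html_page))
```

Everything the model actually needs fits in the short extracted string; the rest of the payload is structure and styling hooks it has to skip over.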