Imagination: Théo Bondolfi, formalized with AI assistance.

WikiDeal operates under a mandatory AI transparency rule: any content, proposal, code, or analysis submitted to WikiDeal (whether to the wiki, to an Open Call competition, or to a governance vote) must disclose whether AI tools were used in its creation and, if so, which tools and what prompts were provided.

The Prompt Disclosure Requirement

Transparency about AI use is not merely an ethical preference at WikiDeal; it is a structural requirement embedded in the platform's content policy. Every wiki article that was assisted by AI carries the yellow disclaimer banner visible at the top of this page. Every Open Call submission must include a section titled "AI Contribution Disclosure" that specifies: the name and version of any AI system used; the exact prompts or instructions provided; the portions of the output that were accepted, modified, or rejected; and the human contributor's own intellectual contribution to the final result.
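For illustration, a disclosure of this kind could be captured as a small structured record. The field names in the sketch below are assumptions made for readability, not WikiDeal's published schema:

```python
from dataclasses import dataclass
from typing import List

# Illustrative only: WikiDeal does not publish a formal schema for the
# "AI Contribution Disclosure" section; these field names are assumptions.
@dataclass
class AIContributionDisclosure:
    ai_system: str            # name and version of the AI system used
    prompts: List[str]        # the exact prompts or instructions provided
    output_accepted: str      # portions of the AI output kept as-is
    output_modified: str      # portions edited by the contributor
    output_rejected: str      # portions discarded
    human_contribution: str   # the contributor's own intellectual input
```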

This requirement serves multiple purposes. It reduces the tendency to overstate one's own contribution (claiming AI-generated insight as original thought) and to understate AI's role (hiding AI assistance to appear more impressive). It creates an auditable trail that allows the community to assess the intellectual provenance of proposals and articles. And it builds a shared understanding of how AI is actually being used in the project, which becomes increasingly important as AI tools become more capable.

Audit Trail and Verification

WikiDeal maintains an AI audit trail: a structured log of AI-assisted contributions, indexed by contributor, tool, date, and content type. This log is publicly accessible. While individual contributors' identities may be pseudonymous (WikiDeal respects privacy), the AI contribution record is open. This allows researchers studying AI use in cooperative governance to access real data, and allows the community to audit compliance with the disclosure requirement.
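As a rough sketch, one entry in such a log might look like the following structure, together with the kind of query a researcher could run over the public record. The article does not specify the log's actual format, so the field and function names here are illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import date
from typing import List

# Hypothetical shape of one audit-trail entry; the real WikiDeal log
# format is not described in this article.
@dataclass
class AuditEntry:
    contributor: str    # pseudonymous handle
    tool: str           # AI system used
    logged_on: date     # date of the contribution
    content_type: str   # e.g. "wiki_article", "open_call_submission"
    page: str           # the page or proposal the entry refers to

def entries_by_tool(log: List[AuditEntry], tool: str) -> List[AuditEntry]:
    """Example of a simple query over the public log: filter by AI tool."""
    return [entry for entry in log if entry.tool == tool]
```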

Verification is community-driven. Any user can flag a contribution as potentially non-compliant with the AI disclosure rule, triggering a review process managed by the wiki's moderation team. Verified violations result in a request for retroactive disclosure; persistent non-disclosure can result in a contribution being marked as non-compliant and excluded from formal proceedings (though it remains visible in history).
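The review flow described above can be read as a small state machine. The sketch below models it that way under stated assumptions; the state and event names are invented for illustration and do not correspond to WikiDeal's actual moderation tooling:

```python
from enum import Enum, auto

# Assumed state names for the review process described in this section.
class ComplianceStatus(Enum):
    COMPLIANT = auto()             # no open issue
    FLAGGED = auto()               # any user can flag a contribution
    UNDER_REVIEW = auto()          # moderation team reviews the flag
    DISCLOSURE_REQUESTED = auto()  # verified violation: retroactive disclosure asked
    NON_COMPLIANT = auto()         # persistent non-disclosure: excluded from
                                   # formal proceedings, still visible in history

def next_status(current: ComplianceStatus, event: str) -> ComplianceStatus:
    """Minimal illustrative transition logic for the review process."""
    transitions = {
        (ComplianceStatus.COMPLIANT, "flag"): ComplianceStatus.FLAGGED,
        (ComplianceStatus.FLAGGED, "open_review"): ComplianceStatus.UNDER_REVIEW,
        (ComplianceStatus.UNDER_REVIEW, "violation_verified"): ComplianceStatus.DISCLOSURE_REQUESTED,
        (ComplianceStatus.UNDER_REVIEW, "no_violation"): ComplianceStatus.COMPLIANT,
        (ComplianceStatus.DISCLOSURE_REQUESTED, "disclosed"): ComplianceStatus.COMPLIANT,
        (ComplianceStatus.DISCLOSURE_REQUESTED, "ignored"): ComplianceStatus.NON_COMPLIANT,
    }
    return transitions.get((current, event), current)
```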

Why Transparency Reduces Conflict

Théo Bondolfi has noted that mandatory AI disclosure has a secondary benefit: it reduces the "my idea is better" dynamic that can paralyse open communities. When every proposal comes with a clear record of its intellectual ingredients (human insight, AI drafting, empirical data sources), it becomes easier to evaluate proposals on their merits rather than on their authors' prestige. The proposal that synthesises AI assistance with genuine human experience and wisdom stands on its own, regardless of who submitted it.