Contributors increasingly use AI tools to read code, explore codebases, and generate changes. In many open source projects, this is already changing how issues are opened and how pull requests are submitted. While AI can help people get started, it also creates new challenges for maintainers.
This talk is based on real discussions and concrete examples from the open source community. Maintainers in projects such as Django, Python, GNOME, and OCaml report similar patterns: large or unnecessary AI-generated changes, missing design discussion, references to non-existent APIs, and contributions that are technically correct but hard to review and maintain. In many cases, this shifts work from contributors onto maintainers whose time is already stretched thin.
The focus of this talk is not on banning or promoting AI. The shared concern across these communities is responsibility. Problems appear when AI replaces understanding, testing, and human accountability, breaking the social processes that open source depends on.
The talk also looks at how projects are responding. Some add documentation, disclosure rules, or review guidelines. Others start wider discussions about governance, sustainability, and legal risk. These responses show that the issue goes beyond individual pull requests.
Rather than offering simple answers, this talk shares the real questions the community is asking today, and helps contributors and maintainers think more clearly about the future role of AI in Django and in open source more broadly.