Nasscom has submitted its response to the DPIIT Committee's Working Paper I on Copyright and Generative AI (attached). Our position is clear: we support whichever approach is evidence-based, demonstrably serves all stakeholders, and is practically implementable with acceptable costs and obligations.
The stakes behind that position are significant. India has a bold AI ambition that this copyright debate could either support or undermine, depending on how policymakers proceed.
The DPIIT Committee's Working Paper presents two paths: the majority view proposes mandatory licensing with centralised royalty distribution, while the dissenting view proposes a text and data mining (TDM) exception with opt-out rights. Both aim to balance creator compensation with AI innovation. Both are largely untested at the scale required for generative AI.
Before Choosing a Solution, Understand the Problem
A central concern driving the majority proposal is that creators currently receive little or no compensation for AI training use. But what does the evidence actually show for India?
More importantly, what does India's existing licensing landscape actually look like? What commercial agreements between developers and rightsholders already exist? Are they functioning? Do they reach individual creators and smaller rightsholders, or only well-resourced institutions?
The US Copyright Office examined this question carefully and found a developing market for training data licensing. It recommended allowing that market to mature before intervening. India has not yet conducted this baseline assessment.
Legislating without this evidence risks displacing commercial arrangements that are quietly working.
The Mandatory Licensing Model: Promising in Principle, Complex in Practice
The proposal promises certainty. Creators would receive statutory royalties calculated as a fixed percentage of AI developer revenue, with a Copyright Royalty Collection and Administration Trust (CRCAT) handling distribution.
But royalty rates would be set through stakeholder consultation without market signals. What prevents rates from being systematically too high or too low? How would the system account for value differences between specialised and mass-produced content?
What governance structure would ensure CRCAT operates independently and transparently?
Distribution raises equally difficult questions. CRCAT would decide its methodology through simple majority vote of member organisations. Without a prescribed approach, how would billions of works be valued? By content contribution? Usage intensity? Revenue attribution? Can committee processes resolve questions of this economic complexity?
Then there is the infrastructure. Tracing ownership across billions of data points to millions of rightsholders requires systems that do not currently exist in India. Digital watermarking, blockchain, and content fingerprinting are proposed. But are these mature enough at the required scale? Could infrastructure costs exceed the royalties being distributed?
The Opt-Out Model: Flexible but Not Without Gaps
The alternative is more flexible. Creators retain control. Administrative overhead is lower. But EU research shows opt-out mechanisms are predominantly used by well-resourced institutions, not individual creators. India has a vast and diverse creator community. Would they realistically be able to navigate opt-out systems, or would the protection exist only on paper?
Most importantly, if a TDM exception is introduced, what enforcement mechanisms would ensure AI developers honour opt-out obligations?
Both models share a problem: poor metadata. Much online content lacks the identifiers needed to establish ownership. Orphaned works are common. Even available metadata often points to platforms and aggregators rather than original creators.
For mandatory licensing, this is a systemic barrier. You cannot distribute royalties to rightsholders you cannot identify. For opt-out mechanisms, it prevents legitimate creators from exercising rights they are theoretically entitled to.
Transparency: Principle Versus Practice
Both proposals include transparency obligations. The mandatory model requires disclosure forms to facilitate royalty distribution. The opt-out model uses transparency to demonstrate compliance.
But what degree of disclosure provides meaningful information to rightsholders without revealing commercially sensitive training processes? Can current technical capabilities track billions of training inputs accurately? How would requirements adapt as AI development practices evolve?
These questions determine whether policy works in practice.
What the UK Is Doing Differently
After receiving over 11,500 consultation responses, the UK deployed 80 policy experts to analyse stakeholder views.
Following closure of the consultation, the government committed that every response would be read and analysed by a human, without the use of AI or other automated tools.
It then mandated an economic impact assessment and established four technical working groups, drawing domain-specific expertise from music, film, tech, and academia, to examine: technical measures for controlling access, effects on AI developers' data access, disclosure requirements, and licensing frameworks.
Results are due to be published in March 2026. Only then will the UK legislate.
The approach recognises a truth: complex policy requires evidence, not theoretical preferences.
India's Path Forward
DPIIT should publish all consultation responses, appropriately redacted for commercial confidentiality. Transparency builds legitimacy and reveals implementation concerns across stakeholder categories.
Following MeitY's IndiaAI model (seven expert groups) and MCA's Insolvency and Bankruptcy Code approach (four working groups), DPIIT could constitute domain-specific committees. These should deliver within six months: technical feasibility assessments, cost-benefit analyses, implementation roadmaps, and recommendations on whether regulatory intervention is necessary.
The choice between centralised administration and market-enabling rules should be informed by empirical evidence, not assumptions or perceptions.
India's AI ambitions are real and worth protecting. So are the interests of its creators. Getting this framework right serves both. Getting it wrong serves neither. The question is not which option to choose. The question is whether we know enough yet to choose wisely.
For any suggestions or queries, please write to Sudipto Banerjee at sudipto@nasscom.in with a copy to policy@nasscom.in.

