NEW DELHI – As India prepares to host the AI Impact Summit 2026, a new policy report has raised alarm over the weaponization of artificial intelligence for hate speech, state surveillance, and systemic discrimination.

The AI Impact Summit is an international summit on artificial intelligence scheduled to be held in New Delhi from 16 to 20 February 2026. The report was released on Thursday jointly by the Center for the Study of Organized Hate (CSOH), a Washington-based think tank, and the Internet Freedom Foundation (IFF), an India-based digital rights advocacy group.

The report has raised serious concerns about the country’s AI governance framework, warning that rapid AI expansion without enforceable safeguards could deepen discrimination, surveillance, and democratic erosion.

Titled “AI Governance at the Edge of Democratic Backsliding,” the report critically examined India’s approach to artificial intelligence regulation in the run-up to the Summit, which aims to position India as a Global South leader in AI governance.

Summit Rhetoric vs. Ground Reality

India’s official vision for the Summit centers on “Democratizing AI and Bridging the AI Divide” through the pillars of “People, Planet and Progress.” However, the report argued that the government’s regulatory approach favors voluntary compliance and industry-led standards over binding legal accountability.

“India does not have a comprehensive, specialized AI regulatory framework comparable to the EU AI Act,” the report noted, adding that the recently released India AI Governance Guidelines explicitly state that “a separate law to regulate AI is not needed given the current assessment of risks.”

The authors warned that this “hands-off approach” may leave marginalized communities exposed to algorithmic harms without meaningful recourse.

The report emphasized that while the Guidelines refer to fairness and accountability, they “fall short of specifying any concrete recommendations” for liability or independent oversight, particularly in high-risk public sector deployments.

Weaponization of Generative AI

A major focus of the report is the rise of AI-enabled targeted hate, especially against religious minorities.

The study documented how generative AI tools are being used to produce photorealistic images and videos that reinforce harmful stereotypes about Muslim communities. It highlighted AI-generated content depicting Muslim men as violent or criminal and Muslim women in sexualized or dehumanizing portrayals.

“Incidents of public tragedy… are exploited to circulate viral AI-generated content that demonizes and vilifies the Muslim community,” the report stated.

It also pointed to instances where AI-generated political propaganda has been shared by ruling party units, including videos targeting opposition leaders in states heading toward elections in 2026.

According to the authors, the lack of enforceable safeguards risks normalizing such digital hate campaigns, especially in a politically polarized environment.

Surveillance and Welfare Risks

The report also flagged concerns about integrating AI into India’s Digital Public Infrastructure, including Aadhaar-based authentication systems.

While government documents promote a “techno-legal approach” to governance — embedding safeguards within AI systems — the report argued that without statutory obligations and independent oversight, this approach remains largely aspirational.

In welfare delivery systems, previous studies cited in the report showed that mandatory biometric authentication has led to the exclusion of vulnerable workers from employment programs. The authors cautioned that AI integration could deepen such exclusion.

“Reliance on existing regulation without imposing enforceable accountability obligations on AI systems becomes effectively meaningless in practice,” the report argued.

Environmental and Infrastructure Concerns

The report further questioned India’s push to expand AI infrastructure, including large data centers, without robust environmental impact assessments.

Data centers require vast amounts of electricity and water, raising concerns in a country already facing water stress and air pollution challenges. The authors called for greater transparency from private sector operators on energy consumption, water usage, and sustainability planning.

A “truly democratic vision,” the report said, must “actively engage with local communities and environmental experts” before infrastructure expansion.

Synthetic Content Regulation Raises Free Speech Concerns

The report also analyzed recent amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules notified on February 10, 2026, just days before the Summit.

While the amendments aim to address deepfakes and unlawful synthetic content, civil society groups have raised concerns over vague definitions, potential over-censorship, and privacy risks stemming from mandatory labeling and identity disclosure requirements.

The report warned that the three-hour takedown timeline for unlawful synthetic content may incentivize over-removal of content and could undermine freedom of expression.

It also noted that sharing user identity information with authorities without prior judicial oversight poses significant privacy concerns.

Innovation vs. Accountability

A central tension identified in the report is India’s prioritization of “responsible innovation” over precautionary regulation.

The AI Governance Guidelines stated that “responsible innovation should be prioritised over cautionary restraint.” However, the authors argued that positioning regulation as a barrier to innovation is a contested view and that minimal transparency and safety guardrails are essential, particularly in high-risk applications.

The report also questioned whether India’s AI sovereignty ambitions — including the India AI Mission — can truly democratize access while relying heavily on foreign investments and Big Tech infrastructure.

Call for Rights-Based Framework

In its recommendations, the report called for:

- Statutory transparency and accountability obligations for AI developers and deployers
- Independent oversight mechanisms for public-sector AI deployments
- Clear liability frameworks across the AI value chain
- Stronger environmental safeguards for AI infrastructure
- Protections for freedom of expression and privacy in synthetic content regulation

As global attention turns to New Delhi for the AI Impact Summit 2026, the report urged policymakers to align rhetoric about democratizing AI with enforceable safeguards that protect minority rights, civil liberties, and democratic institutions.

“The summits have emerged as important sites for multilateral deliberation,” the report concludes, “but without institutional grounding and enforceable accountability, meaningful action remains elusive.”