What Schools are Not Telling You About AI in Classrooms

What Schools are Not Telling You About AI in Classrooms

Analytics India Magazine (Merin Susan John)

The days when blackboards and books defined Indian classrooms are passé. Touchscreens, apps and chatbots now mediate how millions of children learn. Global tech giants have taken notice as well—Google’s education suite, Microsoft’s Copilot for students and OpenAI-powered apps are already being piloted by schools across India.
It may sound like progress. But beneath the buzz lies a bigger question: are Indian K-12 students safe in this new AI-driven learning ecosystem?

A Revolution Without Guardrails 

“AI is helping me do my homework and understand subjects better. I can ask for any help from AI,” said Aksa Cyril, a 10th-grade student from Kerala. 

AI-powered tools today handle everything from tutoring and attendance to behaviour.  Adaptive platforms can now detect a child’s performance, predict their weak spots and customise the next quiz instantly. But what happens when these same systems begin interpreting a child’s emotional behaviour or decision-making patterns?

In India, debates on digital safety have historically focused on screen time; not AI safety. As schools integrate generative models into everyday learning, the gap between risks and safeguards is widening further. 

This gap is especially stark between India’s urban, private schools and rural or government-run schools. While premium institutions are investing in AI labs, licensed tools and digital safety frameworks, many government schools still lack reliable electricity or even computer science as a subject, let alone regulated AI platforms. 

Global reports show that AI chatbots can generate emotionally harmful or inappropriate responses for children, including guidance on self-harm or unethical behaviour. These systems were never designed with children in mind. Without strong guardrails, classrooms risk turning into testing grounds for unproven AI tools.

Push for AI Literacy 

India is aggressively preparing students for an AI-first world. The National Education Policy 2020 (NEP) identified AI as a “core skill for the future”. Following this, the Central Board of Secondary Education (CBSE) introduced AI as a subject for classes 8-10 and plans to extend it to class 3.

Karnataka, Telangana and Maharashtra are already piloting AI literacy programmes. The Karnataka school education and literacy department has announced AI-integrated lesson plans as part of its ‘Education 4.0’ vision, while NCERT and CBSE have been collaborating with tech firms to co-create modules that familiarise students with AI ethics, data use and computational thinking.

But these ambitious steps assume something that’s not yet true: that schools, teachers and systems are prepared.

“AI courses should actually be done by bureaucrats and educator-facilitators first, and then implemented for kids,” Sagar Vishnoi, co-founder and director of Future Shift Labs, said. 

Teachers, the first line of defence against AI misuse, often lack training in generative AI or its ethical challenges. 

A teacher from National Public School (NPS) Bengaluru said they do not use any AI tools that are not explicitly approved by the school. However, in their personal capacity, they rely on large language models (LLMs) such as ChatGPT, Gemini 3 and Perplexity, and platforms like LARA from Teachy to prepare lessons and simplify classroom planning. 

Who Uses Your Child’s Data? 

Behind every tap or query to an AI tool lies a trail of data—performance, attention, behaviour, even emotional cues. This data is extremely valuable to schools and technology providers.

The recently notified Digital Personal Data Protection (DPDP) Rules, 2025, attempt to address this. The rules make it mandatory for platforms to seek verifiable parental consent before collecting or processing children’s data, prohibit behavioural profiling and targeted advertising aimed at minors, and impose fines of up to ₹200 crore for violations.

Yet, enforcement remains weak. An investigation led by Human Rights Watch in 2023 found that the government-run DIKSHA app had accidentally exposed personal data of millions of students and teachers. 

Globally, breaches in education systems are becoming routine. A Times of India-cited survey found that 70% of parents feel uncomfortable about AI systems accessing their children’s data.

“AI stops children from thinking on their own. Learning should be a natural process, not relying on tools to fill in answers,” Subbu, parent of a 15-year-old student, said.

UNESCO’s global guidelines are clear: protecting minors must be a prerequisite, not an afterthought. 

When Chatbots Cross the Line

Watchdogs worldwide have flagged the psychological risks of AI companions for children. A report by K-12 Dive, an industry publication that provides in-depth news, reports and insights on the K-12 education sector, found young users form emotional attachments to chatbots, some of which produce manipulative responses. 

Closer home, educators worry that unsupervised access may normalise cheating, amplify misinformation and dull critical thinking. 

Generative AI also struggles in mental-health contexts. 

A 2025 study by Frontiers in Psychiatry found that LLMs often failed to recognise signs of ideation or provided oversimplified, sometimes harmful advice. 

Why Don’t LLMs Have a ‘Kids Mode’ Yet?

For years, children using the internet relied on kid-safe filters—YouTube Kids, safe-search modes, curated playlists and content restrictions specifically designed for minors. Parents and teachers saw these as essential digital guardrails. There is no such parallel in the world of AI.

Major LLMs, including those increasingly used for homework, doubt-clearing or school projects, do not have dedicated ‘kids mode’ environments with age-appropriate boundaries. Instead, students are interacting with tools built for adults, moderated for adults and trained on datasets never designed for children’s cognitive or emotional needs.

LLMs don’t just show content as YouTube does; they converse, persuade and influence. A chatbot that misunderstands a child’s query, mimics harmful behaviour or produces inappropriate reasoning can do more damage than a single harmful video.

How Some EdTech Players are Responding

PowerSchool, a US-based global K-12 platform, offers an instructive example. Its AI assistant, PowerBuddy, uses strict filters, real-time moderation and restricted access. It also has a custom content-safety model built with Amazon SageMaker to block unsafe prompts while still allowing legitimate educational queries and independent security testing for vulnerabilities like prompt injections. The company is also part of the EDSAFE AI Industry Council to promote industry-wide standards.  

PowerSchool’s example highlights a simple truth: in education, safety cannot be outsourced to algorithms. It must be built into them.

India’s Reality: Promise, Policy and Preparation

India wants AI to become a classroom staple—through AI for All, Education 4.0, Beyond Bengaluru and state-led pilot projects. But digital readiness is uneven. 

Some schools lack basic internet access. Others rely on free AI apps without understanding where student data goes. This is creating a new kind of divide between those with safe AI and those without.

In a first-of-its-kind move in India, Tamil Nadu had announced that government school students will be taught AI-related and ChatGPT-like technologies as part of the state’s Technology Education and Learning Support (TEALS) programme. The initiative, implemented in collaboration with Microsoft, began as a pilot in 14 schools and is now being expanded. Education minister Anbil Mahesh Poyyamozhi had also confirmed that the next phase will involve training teachers across 100 government schools, signalling the state’s push to mainstream AI learning even in public education.

While premium schools based in the metros are racing ahead with curated AI ecosystems, vendor partnerships and paid tools, rural and government schools are struggling to navigate a chaotic market of unregulated apps and zero-support solutions. The ‘AI readiness gap’ risks becoming the next major inequity in Indian education.

UNESCO also warns that ethical use of AI, transparency and accountability must be prioritised alongside innovation. AI literacy must expand beyond technical skills to include awareness about ethics, consent, bias and manipulation detection—the digital equivalent of a moral science class. 

Parents need structured awareness programmes, teachers need certification in AI ethics and schools need localised guidelines for safe chatbot use. 

The National Council for Teacher Education (NCTE) has begun exploring AI-based pedagogy modules, but how far it scores in implementation remains to be seen. 

Moreover, in response to the question posed to the education ministry, the Lok Sabha itself admits to deep gaps. The government stated that the SOAR initiative was launched to ensure “equitable access to AI education across geographies”, acknowledging that rural and tribal schools lag significantly behind urban peers. Even today, only model institutions like Navodaya Vidyalaya have reliable digital infrastructure, while most government schools struggle with devices, connectivity and trained teachers. Meanwhile, NCERT is still developing AI textbooks for higher grades, meaning structured AI education remains far from universal.

The post What Schools are Not Telling You About AI in Classrooms appeared first on Analytics India Magazine.

Generated by RSStT. The copyright belongs to the original author.

Source

Report Page